top | item 46730436

Updates to our web search products and Programmable Search Engine capabilities

236 points | 01jonny01 | 1 month ago | programmablesearchengine.googleblog.com

194 comments


01jonny01|1 month ago

Google quietly announced that Programmable Search (ex-Custom Search) won’t allow new engines to “search the entire web” anymore. New engines are capped at searching up to 50 domains, and existing full-web engines have until Jan 1, 2027 to transition.

If you actually need whole-web search, Google now points you to an “interest form” for enterprise solutions (Vertex AI Search etc.), with no public pricing and no guarantee they’ll even reply.

This seems like it effectively ends the era of indie / niche search engines being able to build on Google’s index. Anything that looks like general web search is getting pushed behind enterprise gates.

I haven’t seen much discussion about this yet, but for anyone who built a small search product on Programmable Search, this feels like a pretty big shift.

Curious if others here are affected or already planning alternatives.

UPDATE: I logged into Programmable Search and the message is even more explicit: "Full web search via the 'Search the entire web' feature will be discontinued within the next year. Please update your search engine to specify specific sites to search." With this link: https://support.google.com/programmable-search/answer/123971...

zitterbewegung|1 month ago

I know that DuckDuckGo uses Microsoft Bing Custom Search, and honestly it is a much more robust system since you don't have to worry about Google axing it. https://www.customsearch.ai

01jonny01|24 days ago

Important Email Update from Google:

Dear Programmable Search Engine user,

Thank you for contacting us via the Web Search Products Interest Form. We have received your feedback and are actively reviewing the specific use cases you shared.

We are writing to share important details regarding the transition plan and the available solutions for your search needs.

1. For Unrestricted Web Search: Future Web Search Service

For partners requiring unrestricted "Search the entire web" capabilities, we are developing a new enterprise-grade Web Search Service. As you evaluate your future needs, please be aware of the commercial terms planned for this new service:

Pricing: USD $15 CPM (Cost Per Mille / 1,000 requests).

Minimum Commitment: A minimum monthly fee of USD $30,000 will apply.

We’ll release more information on this service later in 2026. Existing 'Search the entire web' engines remain functional until January 1, 2027.

2. For AI & Advanced Search: Google Vertex AI

We strongly encourage you to explore Google Vertex AI as another option for partners who need enterprise search and AI capabilities across 50 or fewer domains. Vertex AI offers powerful capabilities for:

Grounded Generation: Connecting your AI agents to your own data and/or to Google Search to provide accurate, up-to-date responses.

Custom Data Search: Building enterprise-grade search engines over your own data and specific websites.

This solution is available now and is designed to scale with your specific application needs.

Clarification on Current Service Status:

While you evaluate which path fits your business needs, please remember the timeline for your current implementations:

Existing Projects: If you have an existing Programmable Search Engine configured to "Search the entire web," your service will continue to function until January 1, 2027. You have the full year to plan your migration.

New Projects: As of January 20, 2026, new engines created in the Programmable Search Engine admin console are restricted to "Site Search Only" (specific domains only).

Later in 2026 we’ll provide you with more updates regarding the new Web Search Service and the means to express your desire to use it to power your web search experiences.

Sincerely,

Programmable Search Engine Team
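Back-of-envelope arithmetic on the terms quoted in the email above (a sketch; the $15 CPM and $30,000 minimum are the only stated numbers, and the billing model beyond that is an assumption):

```python
# Cost model for the quoted terms: $15 CPM (per 1,000 requests)
# with a $30,000/month minimum commitment.
CPM_USD = 15.0
MIN_MONTHLY_USD = 30_000.0

def monthly_cost(requests: int) -> float:
    """Usage-based cost, floored at the minimum commitment."""
    usage = requests / 1000 * CPM_USD
    return max(usage, MIN_MONTHLY_USD)

# The minimum covers 30,000 / 15 * 1,000 = 2,000,000 requests/month;
# below that volume you pay the floor regardless.
breakeven_requests = int(MIN_MONTHLY_USD / CPM_USD * 1000)
```

In other words, anyone sending fewer than roughly two million queries a month would pay the $30,000 floor regardless of usage, which is what prices out indie products.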

saltysalt|1 month ago

I built my own web search index on bare metal, index now up to 34m docs: https://greppr.org/

People rely too much on other people's infra and services, which can be decommissioned anytime. The Google Graveyard is real.

salawat|1 month ago

It's been clear for the last decade that we have to wean ourselves off of centralized search indexes, if only to inoculate the Net against censorship and politically motivated black-holing.

I can only weep at this point, as the heroes that were the Silent and Greatest generations (in the U.S.), who fought hard to pass on as much institutional knowledge as possible through hardcore organization and distribution via public and university libraries, have had that legacy shit on by these ad-obsessed cretins. The entirety of humanity's published understanding, and we make it nigh impossible for all but the most determined to actually avail themselves of it.

raincole|1 month ago

> “search the entire web”

TIL they allowed that before. It sounds a bit crazy. Like Google is inviting people to repackage google search itself and sell it / serve with their own ads.

throwaway_20357|1 month ago

What are some of the niche search engines built on Google's index that are affected by this?

vagab0nd|1 month ago

Damn, I just wrote a note "search is free" in my aggressively-automate-everything-using-llms personal project plan.md. I guess not anymore.

Antibabelic|1 month ago

Relevant: Waiting for dawn in search: Search index, Google rulings and impact on Kagi https://news.ycombinator.com/item?id=46708678

mrweasel|1 month ago

This might be me reading it wrong, but isn't shutting down the full-web search going against the ruling mentioned in the Kagi post?

> Google must provide Web Search Index data (URLs, crawl metadata, spam scores) at marginal cost.

Maybe they're shutting down the good integration and then Kagi, Ecosia and others can buy index data in an inconvenient way going forward?

jpalepu33|1 month ago

This is a clear example of why building on proprietary APIs is risky for indie devs and small startups. I've seen similar patterns with Twitter's API restrictions and other platforms gradually closing down their ecosystems.

For anyone affected, consider this a forcing function to either:

1. Build your own lightweight search infrastructure (tools like Meilisearch and Typesense make this more accessible now)

2. Use adversarial interop via services like SerpAPI (though Google is already taking legal action there)

3. Pivot to specialized vertical search where you control the data sources

The real lesson here is the importance of owning your core value proposition. If your product's moat depends entirely on a third-party API that can be yanked away with 12 months notice, you don't really have a sustainable business.

Google is essentially saying: indie search is dead, pay enterprise prices or leave. This will probably accelerate the trend toward specialized, domain-specific search engines that don't rely on Google's index at all.
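For context on option 1 above, the core data structure behind self-hosted engines like Meilisearch or Typesense is an inverted index mapping terms to document IDs. A toy sketch (not how either engine is actually implemented, just the principle):

```python
from collections import defaultdict

class TinyIndex:
    """Toy inverted index: term -> set of doc IDs, with AND queries."""

    def __init__(self):
        self.postings = defaultdict(set)  # term -> doc IDs containing it
        self.docs = {}

    def add(self, doc_id, text):
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query):
        terms = query.lower().split()
        if not terms:
            return set()
        # Intersect postings lists: keep docs containing every term.
        result = self.postings[terms[0]].copy()
        for term in terms[1:]:
            result &= self.postings[term]
        return result

idx = TinyIndex()
idx.add(1, "indie search engines on an external index")
idx.add(2, "vertical search with your own data")
```

Real engines add tokenization, ranking, typo tolerance, and on-disk storage on top, but the term-to-postings mapping is the part you "own" when you own your infrastructure.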

solarkraft|1 month ago

This will significantly impact (quite possibly kill) Startpage and Ecosia, who are effectively white-label Google, right?

What alternatives are there besides Bing? Is it really so hard that it’s not considered worth doing? Some of the AI companies (Perplexity, Anthropic) seem to have managed to get their own indexing up and running.

bicepjai|1 month ago

Why don’t we have something more “torrent-like” for search?

Imagine a decentralized network where volunteers run crawler nodes that each fetch and extract a tiny slice of the web. Those partial results get merged into open, versioned indexes that can be distributed via P2P (or mirrored anywhere). Then anyone can build ranking, vertical search, or specialized tools on top of that shared index layer.

I get that reproducing Google’s “Coca-Cola formula” (ranking, spam fighting, infra, freshness, etc.) is probably unrealistic. But I’d happily use the coconut-water version: an open baseline index that’s good enough, extensible, and not owned by a single gatekeeper.

I know we have Common Crawl, but small processing nodes can be more efficient and fresher.

qingcharles|1 month ago

What's to stop someone poisoning the data, though? :(

pona-a|1 month ago

Look up YaCy. It might be close to what you imagine.

jamesbelchamber|1 month ago

Are competing search indexes (Bing, Ecosia/Qwant, etc.) objectively worse in significant ways, or is Google just so entrenched that people don't want to "risk it" with another provider (and/or preferences and/or inertia)?

I suppose I'm asking whether this is actually a _good thing_ in that it will stimulate competition in the space, or if it's just a case that Google's index is now too good for anyone to reasonably catch up at this point.

thayne|1 month ago

Bing recently shut down their API product, which was already very expensive.

If you want programmatic access to search results there aren't really many options left.

01jonny01|1 month ago

The beauty of Google Programmable Search across the entire web is that it's free, and users can make money by linking it to their AdSense account.

Bing charges per query for the average user. Ecosia and Qwant use Bing to power their results, probably under some type of license, which results in them paying much less per query than a normal user.

SirHumphrey|1 month ago

I can manage fine with other search indexes for English-language searches; whether that is because others got better or Google got worse I cannot tell, though I suspect the latter.

But for searching in more niche languages google is usually the only decent option and I have little hope that others will ever reach the scale where they could compete.

Antibabelic|1 month ago

Bing's index is smaller than Google's, and anecdotally I get fewer relevant results when using it, particularly from sites like Reddit that have exclusive search deals with Google.

carlosjobim|1 month ago

Yes, for non English queries they are all rubbish. And that's billions of users.

consumer451|1 month ago

Dumb question:

I keep seeing posts about how ~"the volume of AI scrapers is making hosting untenable."

There must be a ton of new full-web datasets out there, right?

What are the major hurdles that prevent the owners of these datasets from providing them to third parties via API? Is it the quality of SERP, or staleness? Otherwise, this seems like a potentially lucrative pivot/side hustle?

senko|1 month ago

> There must be a ton of new full-web datasets out there, right?

Sadly, no. There's CommonCrawl (https://commoncrawl.org/), which is still far removed from a "full-web dataset."

So everyone runs their own search instead, hammering the sites, going into gray areas (you either ignore robots.txt or your results suck), etc. It's a tragedy of the commons that keeps Google entrenched: https://senkorasic.com/articles/ai-scraper-tragedy-commons
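The robots.txt side of that gray area is checkable with nothing but the standard library; a minimal sketch of the compliant path (the rules below are invented for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: one AI crawler blocked entirely,
# all other bots allowed everywhere except /private/.
rules = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# A compliant crawler calls rp.can_fetch(user_agent, url) before every
# request; ignoring that check is the "gray area" described above.
```

The tragedy is exactly as stated: the more sites add `Disallow: /` rules like the first block, the worse a compliant crawler's index gets, which pushes operators toward non-compliance.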

Terretta|1 month ago

> the volume of AI scrapers is making hosting untenable

Aside from that potential, it's also not true.

A Pentium Pro or PIII with SSE running circa 1998-99 Apache happily delivers a billion hits a month without breaking a sweat, unless you think generating pages on every visit is better than generating pages when they change.

bennydog224|1 month ago

I built many products on Google PSE (Custom Search). Results were nowhere near as good as regular Google, but still useful. I usually needed to use another library to get the DOM content anyway. But it still was solid for grounding/checking data.

RIP, another one to the Google Graveyard.

jpadkins|26 days ago

Your usage was in obvious violation of the terms of service. This is why we can't have nice things.

bovermyer|1 month ago

I'm curious about what it would take to build my own "toy" search engine with its own index. Anyone ever tried this?

marginalia_nu|1 month ago

Yeah that's where I started out in 2021. Been at it for almost 5 years now, last three of which full time. I'm indexing about 1.1 billion documents now off a single server.

Hard part is doing it at any sort of scale and producing useful results. It's easy to build something that indexes a few million documents. Pushing into billions is a bigger challenge, as you start needing a lot of increasingly intricate bespoke solutions.

Devlog here:

https://www.marginalia.nu/tags/search-engine/

And search engine itself:

https://marginalia-search.com/

(... though it operates a bit sub-optimally now as I'm using a ton of CPU cores to migrate the index to use postings lists compression, will take about 4-5 days I think).
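Postings-list compression of the kind mentioned here typically means delta-encoding the sorted doc IDs and storing the gaps as variable-length integers. A generic sketch of that technique (not Marginalia's actual implementation):

```python
def encode_postings(doc_ids):
    """Delta-encode a sorted postings list, then varint-pack the gaps.

    Sorted doc IDs have small gaps, and small integers fit in fewer
    varint bytes -- which is where the compression comes from.
    """
    out = bytearray()
    prev = 0
    for doc_id in doc_ids:
        gap = doc_id - prev
        prev = doc_id
        while gap >= 0x80:                    # emit 7 bits at a time;
            out.append((gap & 0x7F) | 0x80)   # high bit = "more bytes follow"
            gap >>= 7
        out.append(gap)
    return bytes(out)

def decode_postings(data):
    """Invert encode_postings: unpack varints, then undo the deltas."""
    ids, current, shift, prev = [], 0, 0, 0
    for byte in data:
        current |= (byte & 0x7F) << shift
        if byte & 0x80:
            shift += 7
        else:
            prev += current
            ids.append(prev)
            current, shift = 0, 0
    return ids
```

Production systems usually use block-based schemes (PForDelta, SIMD-friendly layouts) instead of byte-at-a-time varints, but the delta-then-small-integers structure is the same.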

Gigachad|1 month ago

Might find YaCy interesting. It’s meant to be a decentralised search engine where users scrape the internet and can search other users indexes in a kind of torrent like way.

I found it didn’t really work as a real search engine but it was interesting.

reddalo|1 month ago

Good luck scraping websites without being blocked, if you're not Google.

joelboersma|1 month ago

I've been occasionally working on a toy project that's basically "Google search in a TUI" that used this API. I was already planning on adding Brave Search as an option for a different backend, and I was heavily considering making it the default just because it's much easier to set up on the user's end. This is the straw that broke the camel's back.

nairboon|1 month ago

Regarding alternate search engines: I consider the idea of YaCy kind of interesting: a P2P search engine: https://yacy.net/

Although, it needs some more work and peers to be usable as a general-purpose search engine.

shevy-java|1 month ago

Google has consistently ruined its search engine in the last (almost) 10 years. You can find numerous articles about this, as well as videos on youtube (which is also controlled by google).

Not long ago they ruined ublock origin (for chrome; ublock origin lite is nowhere near as good and effective, from my own experience here).

Now Google is also committing to more evil, trying to ruin things for more people, competitors, you name it. We cannot allow Google to continue on its wicked path here. It'll just further erode the quality. There is a reason why "killed by google" is more than a mere meme - a graveyard of things killed by google.

We need alternatives, viable ones, for ALL Google services. Let's all work to make this world better - a place without Google.

philistine|1 month ago

To me there are two eras of the Google Graveyard(tm). First, there's the "we're a university research group with an ad company footing the bill" era. That's the early Google era, and it was a consequence of its corporate structure. They valued new projects and market fit; profitability and maintenance be damned.

We're in the second era: the era where MBAs are shutting down the last remnants of openness the company ever had.

motoboi|1 month ago

This and the aggressive anti-bot measures at YouTube are Alphabet closing off the AI data leak.

zoobab|1 month ago

Antitrust do not work against large companies.

Just dissolve them in acid.

marginalia_nu|1 month ago

This is the type of monopoly abuse these laws were designed to target, and antitrust laws actually do work against large companies.

If you actually enforce them.

Unfortunately, during the Reagan administration, political sentiment toward monopolies shifted and since then antitrust law has been a paper tiger at best.

cubefox|1 month ago

Is this perhaps to prevent ChatGPT, Claude, and Grok from using Google Search? It would make sense for Google to keep that ability for Gemini.

01jonny01|1 month ago

I suspect it's going to hurt the indie developers and small start-ups who do not have special licensing agreements.

direwolf20|1 month ago

They'll go adversarial interop through SerpAPI, just like Kagi does. SerpAPI will get the money instead of Google getting it.

jonplackett|1 month ago

Are search engines like Kagi completely screwed by this or is there a way for them to keep operating?

direwolf20|1 month ago

Kagi doesn't have a partnership with Google - they work under adversarial interoperability, stealing results from Google against their will, and paying some third-party to enable this. They'd like to simply pay Google, but Google doesn't want their money.

TiredOfLife|1 month ago

Kagi is backed by Russia, so they will be fine.

HPsquared|1 month ago

I had misread the title as "Google is ending (full-web search) for [aka in favour of] (niche search engines)"

The correct parsing is: "Google is ending (full-web search for niche search engines)"

dredmorbius|1 month ago

"Google will discontinue third-party niche search engine access to full-web search" would be far clearer.

Given that the title supplied is effectively editorialised, and the original article's title is effectively content-free ("Updates to our Web Search Products & Programmable Search Engine Capabilities"), my rewording would be at least as fair.

HN's policy is to try to use text from the article itself where the article title is clickbait, sensational, vague, etc., however. I suspect Google's blog authors are aware of this, and they've carefully avoided any readily-extracted clear statements, though I'll take a stab...

Here's the most direct 'graph from TFA:

Custom Search JSON API: Vertex AI Search is a favorable alternative for up to 50 domains. Alternatively, if your use case necessitates full web search, contact us to express your interest in and get more information about our full web search solution. Your transition to an alternative solution needs to be completed by January 1, 2027.

We can get a clearer, 80-character head that's somewhat faithful to that with:

"Google Search API alternative Vertex AI Search limited to 50 domains" (70 chars).

That's still pretty loosely adherent, though it (mostly) uses words from the original article. I'm suggesting it to the mods via email at hn@ycombinator.com; others may wish to suggest their own formulations.

mark_l_watson|1 month ago

Not directly covered by this blog, but for low cost and good performance the combination of gemini-3-flash with search grounding is hard to beat, at least for the many small experiments I use it for.

One thing touched upon in comments here: I never understood how it was proper for 3rd parties to scrape Google search results and reuse/resell them.

Really off topic, sorry, but I am surprised that more companies don’t build local search indices for just the few hundred web domains that are important to their businesses. I have tried this in combination with local (small and fast) LLMs and I think this is unappreciated tech: fast, cheap, and local.

sreekanth850|1 month ago

Never build a product whose core feature depends on a third party; you will eventually get fucked, for sure. Always have a 70:30 rule for revenue, where 70% comes from core independent features.

halapro|1 month ago

Soon you'll find that you cannot exist on the web without relying on third parties. Sometimes you'll even have trouble getting paid thanks to the painful existence of payment processors.

direwolf20|1 month ago

That's why I eschew HTTPS.

chromehearts|1 month ago

Is this about the little Google search bar that is present on some websites? Or am I mistaken about something?

01jonny01|1 month ago

Kind of; however, the Google search bar present on a website is usually there to search across that site's own domain, so the results are limited to it, e.g. example.com/page1, example.com/page2. Google will carry on supporting this.

What they are ending is their support for websites to search across the entire web. The websites that search across the entire web are usually niche search engine websites.
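In Custom Search JSON API terms, the distinction comes down to whether a `siteSearch` restriction is applied. A sketch of building the two request styles (the key and engine ID are placeholders, and this only constructs the URL rather than calling the API):

```python
from urllib.parse import urlencode

API = "https://www.googleapis.com/customsearch/v1"

def search_url(query, site=None, key="YOUR_KEY", cx="YOUR_ENGINE_ID"):
    """Build a Custom Search JSON API request URL.

    With site=None and a "search the entire web" engine, this queried
    the whole index -- the mode being discontinued. With siteSearch
    set, results are restricted to one domain, which remains supported.
    """
    params = {"key": key, "cx": cx, "q": query}
    if site:
        params["siteSearch"] = site
        params["siteSearchFilter"] = "i"  # "i" = include only this site
    return f"{API}?{urlencode(params)}"
```

After the change, only the site-restricted form (or an engine configured with up to 50 specific domains) has a supported future.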

thayne|1 month ago

Does this mean the !g bang will stop working in DuckDuckGo?

direwolf20|1 month ago

Doesn't it just redirect you to Google? So it will still work.

londons_explore|1 month ago

What examples are there of people using this?

01jonny01|1 month ago

There are literally thousands of independent search engines that use Programmable Search to search the entire web. Many ISPs use it on their homepages, kids-focused search engines like wackysafe.com use it, and so do privacy-focused search engines like gprivate.com.

lighthouse1212|1 month ago

The 'Google Graveyard is real' sentiment captures something important: every dependency on a large platform is a loan that can be called in. The 34-million-document indie index project someone mentioned is the right response - own your core infrastructure. Easier said than done for whole-web search, but the same principle applies everywhere.

01jonny01|1 month ago

Much easier said than done, especially if you are serving users at scale.
