top | item 45198538

luizfelberti|5 months ago

I was trying to do this in 2023! The hardest part about building a search engine is not the actual searching though; it is (as others here have pointed out) building your index and crawling the (extremely adversarial) internet, especially when you're running the thing from a single server in your own home without fancy rotating IPs.

I hope this guy succeeds and becomes another reference in the community like the marginalia dude. This makes me want to give my project another go...

mhitza|5 months ago

You might want to bookmark https://openwebsearch.eu/open-webindex/

While the index is not currently open source, it should be at some point, maybe when they get out of the beta stage (?); the details are still unclear.

3RTB297|5 months ago

You know, it's possible the cure to an adversarial internet is to just have some non-profit serve as a repo for a universal clearnet index that anyone can access to build their own search engine. That way we wouldn't have endless CAPTCHAs and Anubis and Cloudflare tests every time we try to look up a recipe online. Why send AI scrapers to crawl literally everything when you could get the data for free?

I'll add it to the mile-long list of things that should exist and be online public goods.

moduspol|5 months ago

Is the common crawl usable for something like this?

https://commoncrawl.org

chiefsearchaco|5 months ago

I'm the creator of searcha.page and seek.ninja; Common Crawl is the basis of my index. The biggest problem with ONLY using it is freshness. I've started my own crawling too, but for sure Common Crawl will backfill a TON of good pages. It's priceless, and I would say Common Crawl should be any search engine's starting point. I have 2 billion pages from Common Crawl! There were a lot more, but I had to scrub them out due to resources. My native crawling is much more targeted and I'd be lucky to pull 100k, but as long as my heuristics choose the right targets, those will be very high-value pulls.
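For anyone curious what "starting from Common Crawl" looks like in practice: Common Crawl publishes a CDX-style index API per crawl snapshot, where each record points at an offset/length inside a WARC file holding the archived page. A minimal sketch of building a lookup URL and parsing the index response (the crawl ID below is just an example snapshot name, and the sample line is made-up data):

```python
import json
from urllib.parse import urlencode

def cc_index_url(domain, crawl="CC-MAIN-2024-10"):
    # The index API takes a URL pattern and can return JSON, one record per line.
    params = urlencode({"url": f"{domain}/*", "output": "json"})
    return f"https://index.commoncrawl.org/{crawl}-index?{params}"

def parse_cdx(lines):
    # Each record names the WARC file plus the byte range of the capture,
    # which is what you'd fetch to backfill pages into your own index.
    records = []
    for line in lines:
        rec = json.loads(line)
        records.append({
            "url": rec["url"],
            "warc": rec["filename"],
            "offset": int(rec["offset"]),
            "length": int(rec["length"]),
        })
    return records

# Made-up sample of one index line, just to show the shape:
sample = ['{"url": "https://example.com/", "filename": "a.warc.gz", '
          '"offset": "123", "length": "4567"}']
print(parse_cdx(sample)[0]["warc"])  # a.warc.gz
```

From there you'd issue an HTTP range request for that offset/length and decompress the WARC record to get the page itself.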

giancarlostoro|5 months ago

Most likely it is; the issue then becomes being able to afford the storage for all the files.

wordpad|5 months ago

Why can't crawling be crowdsourced? It would solve IP rotation and spread the load.

Poomba|5 months ago

That’s how residential proxies work, in a perverse way

chiefsearchaco|5 months ago

Common Crawl sort of serves this function. I use it. It's a really good foundation.

6510|5 months ago

The crawl seems hard, but the difference between having something and not having it is very obvious. Ordering the results is not. What should go on page 200, and do those results still count as having them?
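This is the classic ranking problem. Even the simplest lexical approach illustrates why ordering is fuzzier than crawling: you score every document against the query and sort, and the tail of that sort is where "do these still count" lives. A toy TF-IDF sketch (real engines use BM25 plus link and quality signals, but the shape is the same; the documents here are made up):

```python
import math
from collections import Counter

def rank(query, docs):
    """Order docs by a plain TF-IDF score against the query terms."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)

    def idf(term):
        # Rarer terms count for more; smoothed so unseen terms don't blow up.
        df = sum(term in doc for doc in tokenized)
        return math.log((n + 1) / (df + 1)) + 1

    scores = []
    for i, doc in enumerate(tokenized):
        tf = Counter(doc)
        scores.append((sum(tf[t] * idf(t) for t in query.lower().split()), i))
    return [docs[i] for _, i in sorted(scores, reverse=True)]

docs = ["apple pie recipe", "apple apple orchard", "car repair manual"]
print(rank("apple recipe", docs))
# → ['apple pie recipe', 'apple apple orchard', 'car repair manual']
```

The top of the list is easy to get roughly right; deciding whether the zero-score matches at the bottom belong in the results at all is exactly the page-200 question.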

ge96|5 months ago

The IP thing is interesting. I was trying to make a CS:GO bot one time to scrape Steam's prices, and there are proxy services out there you can rent; I tried at least one and it was blocked by Steam. So I wonder if people buy real IPs.

kccqzy|5 months ago

Yeah, people buy residential IPs on the black market. They are essentially infected home PCs in botnets.