Ask HN: Is there a search engine which excludes the world's biggest websites?
578 points| cJ0th | 5 years ago
Are there any search engines which exclude or at least penalize results from, say, top 500 websites?
noad|5 years ago
There are so many cool things I remember reading on the web like 10-20 years ago that still exist that are so buried now on Google they might as well not exist. Nowadays searching any topic seems to always lead you to CNN and Microsoft and Facebook and other huge corporations. Search results are just becoming more sanitized and beige and meaningless every day.
dorkwood|5 years ago
My trick now is to use Twitter to discover interesting people, and follow them there. Granted, it's not a search engine, but it's at least given me the ability to discover weird things again.
Scoundreller|5 years ago
https://www.google.com/search?q=coronavirus
chris_f|5 years ago
I have a theory that web crawling alone is not the best way forward to find the most relevant results because of the volume of content continually being created, much of which is niche and sometimes dynamic.
Instead I believe linking together vertical search sources that have targeted information based on search intent will provide better results.
I created Runnaroo [0] for that purpose. If you search a programming question, it will pull traditional organic results from Google, but it will also directly query Stack Overflow for a deeper search.
[0] https://www.runnaroo.com
lazyjones|5 years ago
unknown|5 years ago
[deleted]
fedede|5 years ago
https://forms.gle/5KuTYVdYaMzRD2n78
ryandrake|5 years ago
Especially shopping. The endless stores are the worst part of search results. If I search for anything that remotely looks like a product, the results are just choked with store after store trying to sell me the thing. Awful.
sanqui|5 years ago
I haven't had great results with it myself, though.
smackay|5 years ago
Garbage in, garbage out, I guess. Still, I like the idea of something to side-step the SEO; perhaps with more effort they can make it work, but relying on Google or any major search engine for the base results is the wrong way to go.
behnamoh|5 years ago
hopesthoughts|5 years ago
erikbye|5 years ago
For other engines you can use https://addons.mozilla.org/en-US/firefox/addon/greasemonkey/ with this script https://greasyfork.org/en/scripts/1682-google-hit-hider-by-d...
atrudeau|5 years ago
terrycody|5 years ago
thekyle|5 years ago
https://millionshort.com/
b0ner_t0ner|5 years ago
Not just ads, but also ranked by the number of third-party cookies/tracking scripts a site has.
joshspankit|5 years ago
Very surprised where I see those these days, and they always make me run away.
tlarkworthy|5 years ago
I do a random city + documentary as the search term, it's taken me all over the world and seen some very strange things.
One of my favourites was Aarhus, which had a Danish-language rapper proclaiming he was putting Aarhus on the global map (I had never heard of the city of Aarhus before). https://youtu.be/WSZxuzgImLo They dis Copenhagen a lot too, lol. You get a more intimate YouTube experience with the low-view videos.
But I've also seen amazing religious rituals, and an excellent documentary on Karachi.
Because it's observable hq you can fork it and figure out your own algorithm for biasing the random.
totemandtoken|5 years ago
Specifically this quote: "The way to win here is to build the search engine all the hackers use. A search engine whose users consisted of the top 10,000 hackers and no one else would be in a very powerful position despite its small size, just as Google was when it was that search engine."
There has been a lot of grumblings about the state of search these days. Maybe the time is nigh for a new search engine?
tetris11|5 years ago
It will be limited, but still quite powerful, similar to the way that we can pick and choose different host file sources from the web.
ublaze|5 years ago
_xnmw|5 years ago
Before I knew about DEVONagent I would often just search multiple engines and sources trying to find something particular (e.g. a particular PDF) or unique results.
https://www.devontechnologies.com/apps/devonagent
lemonberry|5 years ago
petra|5 years ago
Does anybody know of something similar for Windows or Linux?
brightball|5 years ago
pavelmark|5 years ago
joshuaissac|5 years ago
https://github.com/sellomkantjwa/unpinterested
All it does is add -site:pinterest.com to the search bar for image results (can be configured to also do it for Web results), but it gets the job done.
pier25|5 years ago
user9429450|5 years ago
chaos_a|5 years ago
lucb1e|5 years ago
> In the early days of the web, pages were made primarily by hobbyists, academics, and computer savvy people about subjects they were interested in. Later on, the web became saturated with commercial pages that overcrowded everything else. All the personalized websites are hidden among a pile of commercial pages. [...] The Wiby search engine is building a web of pages as it was in the earlier days of the internet.
generalpass|5 years ago
For example, I submitted Pizza Hut's archived original web page [1], but it wasn't added.
Even for a search engine exposing niches, updating a directory manually will likely be too slow, unless the directory maintains a single niche (e.g., the unladen airspeed of every species of swallow); but then we end up with some insane number of search engines, and how do we select which one?
[1] http://www.pizzahut.com/assets/pizzanet/home.html
bsanr2|5 years ago
For example, I enjoy weightlifting and strength sports. I did a search for "muscle", and every result but one was using the word "muscle" as a figurative metaphor. Barely anything about actual muscles. Searching "funk" was just as bad. One page about Motown and a LOT of midis.
EmilioMartinez|5 years ago
Aeolun|5 years ago
I’d have to submit every blog post?
ngold|5 years ago
mikekchar|5 years ago
Basically the idea is to have people band together and "recommend" links. You then do your normal spidering of the websites to create a search engine (or even just call through to a number of existing search engines). However, the ranking of the results is based on the weighting of the recommendations.
It's essentially a white list based on your own personal bubble. Of course this won't work in general because you will always get SEO creeps spamming recommendations. However, it gives you tools for working around those creeps. The average person probably won't be able to manage it, but power users probably will.
By not trying to solve the problem for everybody, it makes it easier to solve the problem for some people. Or at least that's my thesis :-) I might be wrong.
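A minimal sketch of that recommendation-weighted re-ranking, with hypothetical data (the function names and weights are made up for illustration):

```python
# Sketch: re-sort ordinary search results by how strongly domains are
# "recommended" within your personal bubble. Unrecommended domains sink.

def rerank(results, recommendations):
    """results: list of (url, domain) pairs from any search engine.
    recommendations: {domain: total recommendation weight}."""
    return sorted(results,
                  key=lambda r: recommendations.get(r[1], 0),
                  reverse=True)

results = [
    ("https://bigcorp.example/seo-page", "bigcorp.example"),
    ("https://blog.example/deep-dive", "blog.example"),
]
recs = {"blog.example": 3}  # e.g., three trusted people recommended it
print(rerank(results, recs)[0][1])  # blog.example
```

The SEO-spam problem mentioned above then becomes a question of pruning bad recommenders from your own list, which a power user can manage.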
netsectoday|5 years ago
https://yacy.net/
If you're generous, you can make your index available to other P2P instances.
I wanted to run an API search the other week and was blown away by how quickly I could prop up my own custom search portal (I didn't want to pay for API access to other search engines, and YaCy comes with JSON and Solr endpoints).
I ran it locally to test my crawl filters, then pushed a private instance out to Digital Ocean to turn up the heat with the crawling. The only issue I had was the crawler would hit the max memory threshold on long crawls and the container would restart, but that was fixed by scaling up the box.
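For anyone curious, a minimal sketch of hitting that JSON endpoint (this assumes a YaCy instance on its default port 8090, and the RSS-style response shape; adjust host/port and fields for your deployment):

```python
# Build a query URL for a self-hosted YaCy instance's JSON search endpoint.
from urllib.parse import urlencode

def yacy_search_url(query, host="localhost", port=8090, max_records=10):
    params = urlencode({"query": query, "maxRecords": max_records})
    return f"http://{host}:{port}/yacysearch.json?{params}"

url = yacy_search_url("music blogs")
print(url)
# With a live instance you could then fetch and walk the results, e.g.:
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(url))
#   for item in data["channels"][0]["items"]:
#       print(item["link"], item["title"])
```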
l72|5 years ago
While I typically still use RSS for reading music blogs, I find having the search engine is a great way to go back and find something or discover something new! Every time I find a new blog, I just add it as an index to yacy to crawl.
I think it'd be great to see people spinning up larger instances that are highly specialized. For example, maybe a search engine that is dedicated solely to sci-fi and only crawls high quality boards, personal sites and blogs, and skips all the spammy, seo-optimized sites.
crawlcrawler|5 years ago
chris_f|5 years ago
allwynpfr|5 years ago
nic-waller|5 years ago
I share that same desire to visit the web less travelled. I want to discover interesting sites that deserve to be bookmarked because they will never show up in a search engine.
77ko|5 years ago
hopesthoughts|5 years ago
severine|5 years ago
True "Interdimensional Cable" vibes.
tetris11|5 years ago
dangoljames|5 years ago
This was fire. If a topic were being discussed on the web, you could find it with this tool. Unfortunately, it did not fit the vision of the parasitic overlords who bred us to produce and consume for their benefit.
visarga|5 years ago
dennisy|5 years ago
You could add a bunch of heuristics such as size, number of links etc.
Maybe even train a classifier to select the “smaller” part of the web.
inopinatus|5 years ago
When I type “shoes”, it would give me: links for the functional and creative history of footwear, the taxonomy of shoes, methods of construction, current and historical footwear industry data, synonyms and antonyms, related terms and professions, the dictionary definition, and similar links related to secondary meanings (such as any protective covering at the base of an object, horseshoes etc). I’d also hope for a comedy link to a biography of Cordwainer Smith.
What I actually get, which I don’t want at all: pages and pages of shoe shopping.
The various means to exclude “top X sites” are the roughest possible heuristic in that direction, and throw out the baby with the bathwater (for example, a long-established manufacturer may well have an informational online exhibit)
Google has essentially failed me in its primary mission. Bing at least has the grace to admit they are here to “connect you to brands”. And sadly, right now, every other option is an also-ran.
In practice I use DDG, directed by !bangs towards known encyclopaedic or domain-specific sources. I am certain that I’m missing out.
seektable|5 years ago
* when you make a query to this knowledge base, it has a history of your prev searches / preferences (not google)
* it can propose variants of suggestions on what is your intent in this particular query - and make much more detailed queries (auto include/exclude keywords websites etc) to multiple sources (not only google, maybe anonymously)
* it can parse results from these sources and re-arrange them (use own rank system) according to the your preferences. In this system, you can explicitly say - I hate that, and I like that, and this will affect the behavior. Yes this is 'information bubble' but it is controlled by you and not by google!
* finally, this system may work in the background and handle 'research' search queries. What I mean here: currently, Google is about instant search; it gives you results in milliseconds, and that's all. It cannot spend much computation on a more precise, more intelligent check of the content behind the links in the search results, and it cannot do reasoning, so you have to do that yourself: open links from the 1st page, close most of them immediately because they are not relevant for you, go to the 2nd page, and so on. It would be cool if most of this could be automated; with modern natural language processing approaches and old-school Prolog-like reasoning this is realistic, not a fantasy from sci-fi.
My vision is that this kind of search assistant cannot be SaaS / closed source. It is about freedom, and thus it should be an open source / self-hosted app that can be deployed on a PC or on a cloud VM, with hosting controlled by end users, not companies.
I don't know if something like this exists. If not, maybe it's time to create it.
atlantique|5 years ago
text_exch|5 years ago
Discovering unknown parts and blogs on the internet is one of the enduring goals of a newsletter that I run [1], which provides a single link to an interesting article every day, usually by lesser-known authors and blogs across the internet.
[1] www.thinking-about-things.com
hopesthoughts|5 years ago
011-video|5 years ago
On a daily basis your brain uses shortcuts to get to the point. Open Firefox (of course) and press ALT+B. Then add a new bookmark, for instance:
Name : Stack Overflow
Location : https://stackoverflow.com/search?q=%s
Tags :
Keyword : st
Now if you want to search "javascript timer", just type : st javascript timer
Add "%s" to all your favorites website search url.
Example : https://en.wikipedia.org/wiki/%s
To discover some new website content, apply the same trick to Hacker news, Reddit or any RSS River.
Voila, bye bye GG.
NateEag|5 years ago
See this example of filtering Stack Overflow out of search results:
https://www.google.com/search?q=loop+over+array+items+in+jav...
1f60c|5 years ago
brentis|5 years ago
Popularity, relevance, age, type, etc. Type could be blog, forum, site, or video. Or like it used to be.
busymom0|5 years ago
sneeuwpopsneeuw|5 years ago
Then I use Violentmonkey, an open source js/css injector, to inject this user script: https://greasyfork.org/nl/scripts/1682-google-hit-hider-by-d... This will block specific domains for you in Google, Yahoo, DuckDuckGo, etc. I use this to block domains like Quora, SourceForge, CNET and Softonic.
The nice thing about this script is that you can permaban domains you know are junk and they will be completely removed, or you can just ban a domain, like commercial websites. When you ban something it is not removed from Google or DuckDuckGo; it only shows the title in light gray. I'm currently experimenting with this on some major webstores, so I can't really say yet whether it will help you, but it can be a good start.
(edit) I saw some people ask why this was not possible before. Google allowed you to block domains and websites a few years ago, but they removed this feature. DuckDuckGo never allowed you to do that, because that would mean having a cookie that remembers your preferences, and that is against their principles.
1f60c|5 years ago
I knew about !bangs, but I didn't know you could put them anywhere in the query (e.g. "hello !g world" searches Google for "hello world"). This is going to save me a lot of time on mobile. Thanks!
unknown|5 years ago
[deleted]
greglindahl|5 years ago
Implementing this properly involves having your own search index. And that's pretty expensive.
bamboozled|5 years ago
Edit: Maybe it’s the first million results? I use it to find obscure things sometimes.
mmsimanga|5 years ago
DavidPiper|5 years ago
A search engine that returns results whose pages weigh in under a certain size.
From the comments it seems most of the "cruft" filling up Google results are newer web apps, generally JS-heavy and advertising-heavy, etc.
If you had a filter for pages with (e.g.) < ABC kb of JS, < XYZ external links (excluding img tags), I feel like there'd be a good chance that the "old" web and the "unknown" web would bubble to the top.
There are plenty of false positives (particularly for "small" forums built with modern JS apps, etc.), but it could be one of many filtering tools to achieve better search results.
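A minimal sketch of that filter, with arbitrary thresholds (counting script tags as a stand-in for JS byte weight, and anchor links as the external-link count, which naturally excludes img tags):

```python
# Rough "page weight" heuristic: penalize pages with many <script> tags
# and many external <a href> links. Thresholds here are placeholders.
from html.parser import HTMLParser

class WeightScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.scripts = 0
        self.external_links = 0

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.scripts += 1
        elif tag == "a":
            href = dict(attrs).get("href") or ""
            if href.startswith("http"):
                self.external_links += 1

def looks_lightweight(html, max_scripts=2, max_links=20):
    scanner = WeightScanner()
    scanner.feed(html)
    return scanner.scripts <= max_scripts and scanner.external_links <= max_links

print(looks_lightweight("<html><body><a href='http://x.test'>x</a></body></html>"))
```

A real implementation would weigh actual fetched script bytes, but even this crude tag-count version separates a hand-coded page from a typical ad-heavy app shell.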
ngold|5 years ago
turnipla|5 years ago
Now there are a few extensions that do that, but obviously they only hide the results from each page, so sometimes you will see pages with 2 results, if any at all.
rozab|5 years ago
methou|5 years ago
petra|5 years ago
But i find the search is at a much lower quality than Google.
dexen|5 years ago
--
[1] https://demainstream.com/
bmd3991|5 years ago
chongli|5 years ago
dddddaviddddd|5 years ago
peel40|5 years ago
[search term] -google -youtube -facebook ... -top100website and it should work.
I found a list of the top 1m alexa websites here:
http://s3.amazonaws.com/alexa-static/top-1m.csv.zip
An add-on with that list should do the work.
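A sketch of what such an add-on would do, assuming the CSV's `rank,domain` row format (and keeping N small, since search engines cap query length):

```python
# Turn the first N rows of the Alexa top-1m CSV into "-site:" exclusions
# appended to a search term.
import csv
import io

def exclusion_query(term, csv_text, n=5):
    rows = csv.reader(io.StringIO(csv_text))
    sites = " ".join(f"-site:{domain}" for _rank, domain in list(rows)[:n])
    return f"{term} {sites}"

sample = "1,google.com\n2,youtube.com\n3,facebook.com\n"
print(exclusion_query("indie blogs", sample, n=3))
# indie blogs -site:google.com -site:youtube.com -site:facebook.com
```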
beachy|5 years ago
- there's probably a pretty low limit for size of Google queries, you'll likely hit it quickly
- you won't be able to search for e.g a story about YouTube censoring some content
coronadisaster|5 years ago
bhartzer|5 years ago
It’s custom google search results, but since it’s excluding .com, .net, .org etc then you probably won’t see any of the large sites there.
It’s also interesting to see which sites have been built in the last few years, as the new gTLDS haven’t been around that long.
rkagerer|5 years ago
loosetypes|5 years ago
I was intrigued by how dorkwood's approach has changed over time, as described in a reply to a sibling comment.
As general search results get watered down and rotten tomato inflation maybe trends towards reflecting company interests rather than my interest-level, maybe it’s worth re-evaluating the vetting avenues we take as users.
Here’s mine: for games and shows I’ve recently found myself using quantity of fan-videos on YouTube as a proxy for quality. So far it’s been a decent means to find cult followings for something I otherwise wouldn’t necessarily hear about.
Obviously this approach has its flaws - and is subject to financial perversions to an extent - but I figure if enough people genuinely want to pay tribute to a work, it might be worth checking out.
bluishgreen|5 years ago
Personal trick: I follow reaction video blogs, and if they are reacting to something then it is usually worth watching. But reaction blogs are only for short videos and other short form content.
ChrisMarshallNY|5 years ago
I find that the YouTube sidebar is useful for me to find interesting music. I have eclectic tastes, and Google seems to have figured that out. I don't mind.
I suspect that it would be possible to create a custom API query to Google that would have a "blacklist."
smsm42|5 years ago
I think they try to do exactly what you ask, but I haven't used them extensively so I don't know how good they are.
abarrettwilsdon|5 years ago
Seeing folks mention the NOT operator (-). It's quite powerful! For example, you can do:
intext:"Powered by intercom" -site:intercom.com will find all the sites that use the Intercom widget
or ~blog bread baking -inurl:checkout -intext:checkout will find bread blogs (or similar) without commercial intent
I put together a list of the two dozen or so most useful templates of this, for folks who are interested: https://www.alec.fyi/dorking-how-to-find-anything-on-the-int...
dhbradshaw|5 years ago
Each session would have an updatable list of sites that are favored, whitelisted or blacklisted for a particular class of search.
maayank|5 years ago
Anyone reading this, please post if you find any
crocodiletears|5 years ago
1. Looking for niche domain or institutional/social knowledge produced by experts or insiders for an informed audience that isn't necessarily available in a scientific journal.
Especially with respect to the social sciences and literary analysis, there's a wealth of intelligent commentators that don't surface well on Google without very specific search terms, and the willful subtraction of domains like quora, medium, and tumblr.
They're usually contained on poorly maintained WordPress sites that the author has long-since forgotten about, or as invalid, handcoded html docs hidden in the personal subdomains of university professors and students.
2. Finding online communities that aren't a part of Reddit or a similarly prominent platform
chasd00|5 years ago
technotarek|5 years ago
https://attic.city/
Currently for three product tiers (furniture, home decor, and fashion/clothing) in 14 major US markets, where stores within ~100 miles or a ~2 hour drive are considered as part of the market.
Disclaimer: I'm one of the founders.
fomine3|5 years ago
alibaba_x|5 years ago
amelius|5 years ago
Google says they need our information to "improve our experience", but we can't tell them what to omit ...
rsoto|5 years ago
fedede|5 years ago
https://forms.gle/5KuTYVdYaMzRD2n78
jsgo|5 years ago
pengstrom|5 years ago
third_I|5 years ago
21xhipster|5 years ago
It's kinda new, so it excludes kinda everything :-) But you can make it work better :-)
https://ipfs.io/ipfs/QmQ1Vong13MDNxixDyUdjniqqEj8sjuNEBYMyhQ...
Aeolun|5 years ago
freefriedrice|5 years ago
The problem I see on DDG & Google is having to scroll 5-10 pages of utter SEO nonsense.
"Do you have a question about ____? Many ask about _______. ____ is a common question, here the are we some answer. [sic]".
Just utter garbage pages.
It used to be just with recipes or medical questions, but now it feels like most everything that is a general query.
dredmorbius|5 years ago
piusp|5 years ago
wyck|5 years ago
If anyone noticed, during the first couple of days of covid Google search was free from large media results; the algorithm reverted back to how it was years ago, and it was such a breath of fresh air. Of course they fixed the algo immediately and it went back to only showing curated media results. There was an anon Google employee who posted about why this occurred.
fermienrico|5 years ago
When SEC laws, shareholder interest, quarterly performance and stock volatility come into play, corporations become this mindless, soulless monster that will devour everything in its way and fuck consumers in every which way.
Democratization of funds from central authority to public creates disincentives and the shareholders don’t give a shit about many auxiliary things such as environmental concerns. Bottom line always matters.
It’s not just google but any public corporation. Can you imagine SpaceX being able to operate with the same passion with shareholder interests?
koheripbal|5 years ago
om42|5 years ago
chrischen|5 years ago
kian|5 years ago
pkamb|5 years ago
Especially removing Quora, Pinterest, and aggregation/reposting/SEO/affiliate blogs.
And all "product" images with a white background. Only show real photographs.
dredmorbius|5 years ago
Cyclone_|5 years ago
social_quotient|5 years ago
Just a thought experiment, curious what others think.
wmnwmn|5 years ago
dluan|5 years ago
rdtwo|5 years ago
ErikAugust|5 years ago
tokyokawasemi|5 years ago
unknown|5 years ago
[deleted]
known|5 years ago
moreWeed|5 years ago
Nevada-Smith|5 years ago
[1] https://scholar.google.com/
blondin|5 years ago
can google allow us to exclude certain sites? i was surprised to see w3schools showing up above the official documentation for pandas and numpy. this is simply ridiculous!!
jotm|5 years ago
badrabbit|5 years ago
saadalem|5 years ago
A search engine that shows only URLs that are not indexed by Google / another one that gives you the websites with lower PageRank
bmwracer|5 years ago
hopesthoughts|5 years ago
jungletime|5 years ago
"If you don’t read the newspaper you are uninformed; if you do read the newspaper you are misinformed." Mark Twain
lihaciudaniel|5 years ago
unknown|5 years ago
[deleted]
dangoljames|5 years ago
corndoge|5 years ago
thoughtstheseus|5 years ago
runawaybottle|5 years ago
egberts1|5 years ago
coronadisaster|5 years ago
citizenpaul|5 years ago
aiisjustanif|5 years ago
starfallg|5 years ago
dpau|5 years ago
wojtczyk|5 years ago
Upvoter33|5 years ago
aiisjustanif|5 years ago
martin-adams|5 years ago
graycat|5 years ago
> Ask HN: Is there a search engine which excludes the world's biggest websites?
> Discovering unknown paths of the web seems almost impossible with google et al..
> Are there any earch engines which exclude or at least penalize results from, say, top 500 websites?
Let's back up a little and then try for an answer:
Some points:
(1) For some qualitative exclamation, there is a LOT of content on the Internet.
(2) There are in principle and no doubt so far significantly in practice a LOT of searches people want to do. The search in the OP is an example.
(3) Much like in an old library card catalog subject index, the most popular search engines are based heavily on key words and then whatever else, e.g., page rank, date, etc.
So: (1) -- (3) represent some challenges so far not very well met: In particular, we can't expect that the key words, etc. of (3) will do very well on all or nearly all the searches in (2) for much of the content in (1).
And the search in the OP is an example of a challenge so far not well met.
Moreover, the search in the OP is no doubt just one of many searches with challenges so far not well met.
Long ago, Dad had a friend who worked at Battelle, and IIRC they did a review of information retrieval that concluded that keyword search covers only a fraction, maybe ballpark only 1/3rd, of the need for effective searching. And the search in the OP is an example of what is not covered because the library card catalog did not index size of the book or Web site! :-)!
Seeing this situation, my rough, ballpark estimate has been that the currently popular Internet search engines do well on only about 1/3rd of the content on the Internet, searches people want to do, and results they want to find.
So, I decided to see what could be done for the other 2/3rds.
I started with some not very well known or appreciated advanced pure math; it looks like useless, generalized abstract nonsense, but if you calm down, stare at it, think about it, ..., you can see a path to a solution. Although I never thought about the search in the OP until now, in principle the solution should work for that search too. Or: the math is a bit abstract and general, which can translate in practice to doing well on something as varied as the 2/3rds.
Then for the computing, I did some original applied math research.
Using TeX, I wrote it all up with theorems and proofs.
So, the project is to be a Web site. While in my career I've been programming for decades, this was my first Web site. I selected Windows and .NET, and typed in 100,000 lines of text with 24,000 statements in Visual Basic .NET (apparently equivalent in semantics to C# but with syntactic sugar I prefer).
The software appears to run as intended and well enough for significant production.
I was slowed down by one interruption after another, none related to the work.
But, roughly, ballpark, the Web site should be good, or by a lot the best so far, for the 2/3rds and in particular for the search in the OP.
So, for
> Ask HN: Is there a search engine which excludes the world's biggest websites?
there's one coded and running and on the way to going live!
I intend to announce an alpha test here at HN.
defen|5 years ago
Before you can even do a keyword search, you obviously need an intent to do so. But that means keyword search is pretty useless when you don't know what you don't know.
Encoding that intent... maybe doesn't matter for common searches, but everyone has heard of the concept of "Google-Fu". English text is a pretty lossy medium compared to the thoughts in people's heads... Shannon calculated 2.62 bits per English letter, so the space of possibly-relevant sites for almost any keyword is absolutely enormous (e.g. there are about 330,000 7-letter English keyword searches... distributed across how many trillions of pages, not even counting "deep web" dynamically generated ones?). So we punt on that and use the concept of relevance for sorting results, and in practice no one looks beyond the first 10. I don't know what an alternate encoding might look like, though.
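(The 330,000 figure checks out: 2.62 bits/letter times 7 letters is about 18.3 bits of entropy, and 2^18.3 is roughly 3.3 x 10^5 distinguishable 7-letter queries. A quick sanity check:)

```python
# Back-of-envelope check of Shannon's estimate applied to a 7-letter query.
bits = 2.62 * 7       # ~18.3 bits of entropy in a 7-letter English string
distinct = 2 ** bits  # number of distinguishable 7-letter queries
print(round(distinct))  # roughly 330,000
```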
Gollapalli|5 years ago
notaphilosopher|5 years ago
- health search that excludes sellers, wellness and snake-oil websites
- news search that excludes conspiracy theories, magical thinking, political operatives, and paid bloggers
- image search by similarity, similarity to an uploaded picture/s, words, or description
- media and warez search engine that excludes link-spam and malware sites
- complex queries search because none of them do it well
- anonymity
- shopping search that kicks out disreputable sellers and phony store-fronts
- mapping like OSM but fast, practical with an app, and detail-accessible
- monetize using affiliate links that don't affect ranking
- semi-curated results (domain reputation-ranked voting)
- related pages
- inbound/outbound links search
- archive.org integration &| history page caching
- documented query syntax
- query within results
- quick query history results navigation
- keyword alerts
- keyboard shortcuts that always work
coronadisaster|5 years ago
[deleted]
burmer|5 years ago