pierrefar|2 years ago
Even worse, they state that Brave Search will decline to index a page only if other search engines are also not allowed to index it. It is not morally their right to make that call. A publisher should have full control over which search engines index the website's content. That's the very heart of why the Robots Exclusion Protocol exists, and Brave is brazenly ignoring it.
Even worse than that, the Brave Search API lets you, for an extra fee, get the content with a "license" to use it for AI training. Who gave them the right to distribute the content that way?
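This per-engine control is exactly what the Robots Exclusion Protocol provides: rules are grouped by user-agent token, so a publisher can disallow one crawler while allowing the rest. A minimal robots.txt sketch ("SomeBot" is an illustrative token, not a real crawler name; the complaint above is that Brave reportedly ignores a disallow aimed at it alone):

```
# Googlebot: explicitly allowed everywhere (empty Disallow = no restriction)
User-agent: Googlebot
Disallow:

# SomeBot (illustrative): blocked from the entire site
User-agent: SomeBot
Disallow: /

# Every other crawler: allowed
User-agent: *
Disallow:
```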
I wrote about all this here:
https://searchengineland.com/crawlers-search-engines-generat...
and more references elsewhere in this thread:
https://news.ycombinator.com/item?id=36989129
Amusingly, while I was writing my article, this got posted to their forums, asking about how to block their crawler:
https://community.brave.com/t/stop-website-being-shown-in-br...
No reply so far.
yreg|2 years ago
If you post something to the open web, what's it to you who reads it and how? You can block some IPs but that's about it.
I don't know if Brave has a knowledge graph; if they do, I would understand objecting to them filling it with “stolen” content. But I don't see what the problem is with search.
By the way, isn't everyone's favourite archive.is doing the same thing?
I have no strong opinion on this, curious to hear counter arguments.
jaharios|2 years ago
> A publisher should have full control to discriminate which search engine indexes the website's content
If you don't want someone to see what you publish, block them yourself. Also, why would you want to do that? Do you want Google to own the web or something?
pierrefar|2 years ago
I share your concern about Google having this much power, and I'd add that Microsoft Bing is equally bad but gets away with it because they're smaller. Still, the final decision about which search engine indexes a website is purely the publisher's.
bastawhiz|2 years ago
It simply doesn't sound right to dictate which tool a user can use. It's literally the same as arguing that you should be able to block Firefox from accessing your website, and that it's Mozilla's fault for not respecting your wish as a webmaster to block Firefox exclusively. Or that it's a VPN's fault for not publishing its IP addresses so that you can block it. Or a screen reader's fault for processing text to speech in a way that you disagree with.
Philosophically it seems intuitive to say "I should be able to block a third party that is abusing my site" but it's ignoring the broader context of what "open web" and "net neutrality" actually mean.
I run a service for podcasters. There are podcast apps and directories that either ignorantly make unnecessary requests for content or have software bugs that cause redownloads. I could trivially block them, but I don't because doing so penalizes the end user who is ultimately innocent, rather than the badly behaved service operator. The better solution is primitives like rate limiting, which I use liberally. Plus, blocking anyone literally has a direct effect of incentivizing centralization on Apple, Spotify, etc. and making the state of open tech in podcasting even worse.
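The "rate limit instead of block" approach mentioned above can be as simple as a token bucket keyed by client. This is a minimal sketch of the general technique, not the commenter's actual implementation (the rate and capacity values are illustrative):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: up to `capacity` requests in a burst,
    refilled at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        # Each new client starts with a full bucket.
        self.tokens = defaultdict(lambda: capacity)
        self.last = defaultdict(time.monotonic)

    def allow(self, client: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[client]
        self.last[client] = now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens[client] = min(self.capacity,
                                  self.tokens[client] + elapsed * self.rate)
        if self.tokens[client] >= 1:
            self.tokens[client] -= 1
            return True
        return False

# A misbehaving podcast app hammering the feed: burst of 3 allowed,
# then throttled until tokens refill.
bucket = TokenBucket(rate=1.0, capacity=3)
results = [bucket.allow("1.2.3.4") for _ in range(5)]
print(results)  # → [True, True, True, False, False]
```

Throttled requests can be answered with HTTP 429, which lets a well-behaved client back off instead of punishing its end users outright.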
> the Brave search API allows you (for an extra fee) to get the content with a "license" to use the content for AI training? Who allowed them the right to distribute the content that way?
I don't think any court at this point would back you up on the claim that freely published content, annotated with full provenance, cannot be scraped and published for a fee. Services like this have existed for decades. If you don't want your content scraped, put it behind a login. Especially considering this only applies when you allow other search engines; and if you think Google and Bing aren't using your content to train AI, you're off your rocker.
sangnoir|2 years ago
1. User agents should identify themselves
2. A crawler is not a User agent - it's an agent for Brave
>I don't think there's any court at this point that would back you up that freely published content annotated with full provenance cannot be scraped and published for a fee.
You can't end-run copyright like this: just because something is publicly available doesn't mean anyone can redistribute it. Look at the legal issues & cases relating to Library Genesis.
tympious|2 years ago
What if I consider (some or any of) my ideas to be un-indexable, not directly suitable to representation in any hierarchy other than those I may set them in?
vGPU|2 years ago
To me as a search engine end user, this kind of behavior is undesirable. Why would I want a website to selectively degrade my experience because of my choice in search engine or browser?
Brings back horrible flashbacks of “this website is only compatible with IE6”.
pierrefar|2 years ago
Also, these search crawls by the browser do not identify themselves as crawls at all: they send only Brave's standard UA header, which is a plain Chrome user-agent string.
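To illustrate why that matters: any server-side block keyed on the user-agent string will pass these crawls straight through, because they look like an ordinary Chrome visitor. A hypothetical filter (the token list is illustrative, including "Bravebot", which is not a token Brave actually sends):

```python
# Hypothetical middleware-style check: refuse requests whose UA contains
# a known crawler token. A crawl that presents a plain Chrome UA string
# matches nothing here and gets served like any other visitor.
CRAWLER_TOKENS = ("Googlebot", "bingbot", "Bravebot")  # illustrative list

def is_blocked(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(token.lower() in ua for token in CRAWLER_TOKENS)

chrome_ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
             "AppleWebKit/537.36 (KHTML, like Gecko) "
             "Chrome/115.0.0.0 Safari/537.36")

print(is_blocked("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # True
print(is_blocked(chrome_ua))  # False: indistinguishable from a real user
```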
1vuio0pswjnm7|2 years ago
How many Chrome users have opted in to sending data to Google?
Sometimes uninformed consent is not actually consent. These so-called "tech" companies love to toe that line.