top | item 43529374

offbyone | 11 months ago

Self-hosting breezewiki -- even on the same machine that you browse from -- gets neatly around the way fandom wikis block the breezewiki public nodes. I've got it self-hosted and now I never see that damn fandom interface.
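Self-hosting a node like this is a small deployment. A minimal sketch as a compose file; the image name and the port are assumptions (10416 is BreezeWiki's usual default), so check the project docs for current values:

```yaml
# Hedged single-machine self-host sketch; image name and port are assumptions.
services:
  breezewiki:
    image: quay.io/pussthecatorg/breezewiki:latest  # assumed community image
    ports:
      - "127.0.0.1:10416:10416"  # bind to localhost only; 10416 is the assumed default port
    restart: unless-stopped
```

Pages are then reachable locally in BreezeWiki's path scheme, e.g. `http://localhost:10416/zelda/wiki/Link` in place of `zelda.fandom.com/wiki/Link`.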

immibis | 11 months ago

We need a browser or extension that does this stuff automatically in the browser, so it looks like a normal browser request.

By "this stuff" I mean BreezeWiki, Invidious, Nitter, whatever that one for Reddit was called.

pfg_ | 11 months ago

Can't work for Twitter, because they block logged-out requests.

Dracophoenix | 11 months ago

There's already an extension called LibRedirect that can do this.

noirscape | 11 months ago

Doing this fully in the browser is unfortunately unworkable due to CORS: basically, the same mechanism that stops a malicious page from reading your bank's responses or mounting XSS attacks also prevents a local page from fetching the wiki content cross-origin.

We're essentially reliant on these server-side solutions to proxy requests, because that's the easiest way to do cross-origin requests without making whatever browser deity you annoyed that morning suddenly angry at you. (Extensions can mark domains they're allowed to run on, but those are fixed in the manifest, which makes for really easy whack-a-mole by site operators; and each update would need signing from Mozilla/Google.)

Irritatingly, the same mechanism that's used to stop fraudulent sites can also be used as an easy deterrent against deshittification interfaces.
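The server-side workaround amounts to this: the proxy fetches upstream itself (CORS is a browser policy, so a plain server-side fetch is unaffected) and serves the result from its own origin, rewriting links so navigation never leaves that origin. A minimal sketch, with all hostnames and ports hypothetical and no claim that BreezeWiki works exactly this way:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAM = "https://example.fandom.com"  # assumed upstream, for illustration
LOCAL = "http://localhost:10416"         # the proxy's own origin (assumed)

def rewrite_links(html: str) -> str:
    """Point absolute upstream links back at the proxy, so every follow-up
    request is same-origin and CORS never comes into play."""
    return html.replace(UPSTREAM, LOCAL)

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Server-side fetch: no browser involved, so no CORS enforcement.
        with urlopen(UPSTREAM + self.path) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        # Reply same-origin; the page never makes a cross-origin request.
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(rewrite_links(body).encode("utf-8"))

# To actually serve:
# HTTPServer(("127.0.0.1", 10416), ProxyHandler).serve_forever()
```

A real frontend also re-themes and sanitizes the markup rather than passing it through, but the origin-laundering step above is what the browser's policy can't touch.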

Jedd | 11 months ago

Sounds like an easy way to solve both problems. Does it cache the fandom site and pull it all down, or is it just a redirector / re-theming front-end that makes calls out to fandom for each page load?

I skimmed the docs looking for an architecture overview, but couldn't see an answer to this. The low resource requirements cited for the docker container install suggest it's just doing page-loads and re-rendering them.

EDIT: So I've now set this up via docker on my nomad cluster, and it's just proxying the pages and searches back and forth. It's a bit heavy - sitting at about 410MB while idle - but there's no noticeable performance impact compared to hitting upstream directly.