
Building an Offline Page for Theguardian.com

25 points | pcr910303 | 5 years ago | theguardian.com | reply

12 comments

[+] Grumbledour | 5 years ago | reply
I am not up to date on this kind of technology, but reading the article, this sounds terrible! Not the Guardian's implementation, but the concept of locally "caching" (though I would call it installing) executable code that can then intercept network requests and basically do whatever it wants? Why would we think this is a good idea?

I am not surprised by Chrome doing something like this. But Firefox too? I hope this will be opt-in.

[+] Narretz | 5 years ago | reply
Service workers have been around for a while. As far as I know, their capabilities are restricted to the domain that includes them. So the Guardian website can only intercept network requests made from itself.
[+] geofft | 5 years ago | reply
It can't do whatever it wants. It can make responses for requests that would go to that particular website without actually making the request; that's it. That is, the website can provide some code that makes local responses on its behalf for things it could respond to anyway. It can't intercept (or even see) anything for other sites. That's a significant difference from installing executable code in the form of a standalone binary.
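A sketch of what that scoping looks like in practice. This is not the Guardian's actual code; the cache name, offline URL, and helper function are illustrative. The browser only routes requests from the worker's own origin through the `fetch` handler in the first place, which is the restriction described above.

```javascript
// Hypothetical service worker sketch (e.g. sw.js); names are assumptions.
const CACHE_NAME = 'offline-v1';     // illustrative cache name
const OFFLINE_URL = '/offline.html'; // illustrative offline page

// Pure helper: only full-page GET navigations fall back to the offline page.
function isNavigation(request) {
  return request.method === 'GET' && request.mode === 'navigate';
}

// Guard so the helper above can also be exercised outside a worker context.
if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('install', (event) => {
    // Pre-cache the offline page when the worker is installed.
    event.waitUntil(
      caches.open(CACHE_NAME).then((cache) => cache.add(OFFLINE_URL))
    );
  });

  self.addEventListener('fetch', (event) => {
    // Only same-origin requests ever reach this handler; the worker
    // cannot see or intercept traffic for other sites.
    if (!isNavigation(event.request)) return;
    event.respondWith(
      // Try the network first; serve the cached offline page on failure.
      fetch(event.request).catch(() => caches.match(OFFLINE_URL))
    );
  });
}
```

The page would register this with something like `navigator.serviceWorker.register('/sw.js')`, and the worker's scope is limited to paths under where it is served from.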

I do agree, more broadly, that there's a place in this world for a web-like thing that only lets sites provide text and minimal formatting and doesn't let them run any code at all, even in a constrained/sandboxed environment. (https://gemini.circumlunar.space/ seems like the most promising thing in this space.) But the web hasn't been that since 1995, when Netscape released LiveScript.

[+] mhh__ | 5 years ago | reply
I've been reading an offline version of the guardian for my whole life, it's pretty nifty.
[+] sdiq | 5 years ago | reply
If you clear your cache while your Internet connection is off, this won't work.