
Ask HN: Why hasn't saving webpages to disk gotten better?

28 points | behnamoh | 5 years ago

I use Firefox, and every time I try to save a webpage, the first try fails! The second one works, but the saved file doesn't look like the one you see online. There are workarounds such as the *.mhtml format, but I wonder why browsers haven't gotten better at saving webpages locally.

I've used Chrome too; same problem.

12 comments

asaddhamani | 5 years ago
Use SingleFile. It inlines the CSS and images, and your saved page will look pretty close to the actual page.
account-5 | 5 years ago
I tend to use the SingleFile extension.

I am looking for something that saves just the JSON responses from a website. I can see the JSON in the DevTools Network tab but can't extract it. I'm not a webdev, so it may be easy; I just don't know how it's done.
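One way to do this without being a webdev: in the DevTools Network tab, right-click and choose "Save all as HAR with content". The resulting HAR file is itself JSON, with every response body embedded, so a short script can pull the JSON responses back out. A minimal sketch in Python (the `extract_json_responses` name and the filtering rule are my own; the keys follow the HAR 1.2 format):

```python
import json

def extract_json_responses(har_path):
    """Return (url, parsed_body) pairs for every JSON response in a HAR file.

    HAR files store response bodies under
    log.entries[i].response.content.text, with the MIME type alongside.
    """
    with open(har_path, encoding="utf-8") as f:
        har = json.load(f)

    results = []
    for entry in har["log"]["entries"]:
        content = entry["response"].get("content", {})
        # Keep only responses whose MIME type mentions JSON and that
        # actually carry a body (some entries omit "text").
        if "json" in content.get("mimeType", "") and "text" in content:
            results.append((entry["request"]["url"],
                            json.loads(content["text"])))
    return results
```

Note that DevTools only includes bodies if you pick the "with content" export option; the plain "Save all as HAR" variant leaves them out.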

S4M | 5 years ago
I use wget (on the command line) to save a webpage. It has options to save all the pages a page links to recursively, or only the links within a certain domain.
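A sketch of the flags involved (example.com is a placeholder; exact choices depend on the site):

```shell
# Save one page plus everything it needs to render offline:
#   --page-requisites  : also fetch the images, CSS, and JS the page references
#   --convert-links    : rewrite links so the local copy works offline
#   --adjust-extension : append .html etc. so files open in a browser
wget --page-requisites --convert-links --adjust-extension \
     https://example.com/some/page

# Recursive variant: follow links one level deep, staying on the listed domain.
wget --recursive --level=1 --page-requisites --convert-links \
     --domains=example.com https://example.com/
```

This works well for static pages; anything rendered client-side by JavaScript will still come back as the bare markup, which is where tools like SingleFile have the edge.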
ggm | 5 years ago
The odd thing is that most of the fetched state is already there, either in shmem/mmap or in the on-disk cache. A simple "tee" of the data coming out of QUIC could replay the load of the disparate markup elements.

It's all there. The problem is deciding how to add code to save it in a rational reloadable manner.

Compare it to, e.g., a ZFS snapshot: low-cost, copy-on-write saved state. (Admittedly that's of data already on disk, which is the goal here; but then, the browser cache is on disk too.)

spsphulse | 5 years ago
I'm a happy user of a Chrome extension called Save Page WE.