Screenshotting or PDFing a website is an increasingly important archiving tool to supplement wget. I've come across a lot of websites that won't render any content unless they're talking to a live server.
I couldn't agree more. I wish more sites would load without needing multiple seconds of JS execution and AJAX. One of my TODOs is to get full-page screenshots working as well.
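For the full-page screenshot TODO, here's a rough sketch using headless Chrome's `--screenshot` flag (Chrome 59+; the window size is an assumption, since headless Chrome clips the capture to the window rather than auto-sizing to the document height):

```shell
# Render the page and capture a screenshot (requires Chrome 59+).
# --window-size is a guess at the page height; anything below 4000px
# of content fits, longer pages get clipped.
google-chrome --headless --disable-gpu \
  --screenshot=example.png \
  --window-size=1280,4000 \
  'http://example.com/'
```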
Agreed. I research new media, and archive.org is invaluable to me. I worry that current web sites won't be preservable (much like many of the Flash sites and RealAudio streams of the past, which are largely gone).
What version of Google Chrome do you need for the PDF export to work? I tried it on 58.0.3029.96 (Linux) and it does nothing (no error messages, it just quits without writing any files).
Edit: I'm completely baffled that such widely used software as Google Chrome can have this written in the man page: "Google Chrome has hundreds of undocumented command-line flags that are added and removed at the whim of the developers."
Highly recommend switching to wpull (https://github.com/chfoo/wpull), which was built as a wget replacement. It's what grab-site uses, which is a successor to ArchiveTeam's ArchiveBot.
"grab-site is made possible only because of wpull, written by Christopher Foo who spent a year making something much better than wget. ArchiveTeam's most pressing issue with wget at the time was that it kept the entire URL queue in memory instead of on disk. wpull has many other advantages over wget, including better link extraction and Python hooks."
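A minimal wpull invocation might look like this (wpull keeps wget-compatible long options and adds WARC output; exact flags are per my reading of the wpull README, so treat this as a sketch):

```shell
# Recursively crawl a site and write the results into a WARC archive
# (example.warc.gz). Most wget options carry over; --warc-file is
# wpull's own addition.
wpull 'http://example.com/' \
  --recursive \
  --page-requisites \
  --warc-file example \
  --no-robots
```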
Use zotero and you have your own personal Pocket with snapshots. In addition, you can add tags, organize stuff into folders, etc.
https://www.zotero.org/
Could you add an option to either add tagging, or separate the tagged items into folders?
e.g. "programming/", "docker/", etc. I often find myself digging through my Pocket archive trying to find that one article from 6 months ago, and it gets incredibly annoying.
I like having the sites by timestamp because they're guaranteed to be unique, and it makes traversing them easy. I'd be happy to add a tag column to the index though, which you could use with Ctrl+F to find articles. https://github.com/pirate/pocket-archive-stream/issues/1
I've been thinking along those very same lines for a long time (this project makes me wish for more free time).
I have half a mind to fork this and add something like https://github.com/internetarchive/warcprox, or at the very least walk through the generated HTML and brute-force inline all assets as a first pass :)
Can one automate extensions through headless Chrome? Then you might be able to trigger WarCreate instead. (It would be more efficient to run the Pocket export URLs through WAIL, though; this should give you the WARCs you want.)
Or EML/MHT. It's the format email programs use to store HTML mail, including all pictures, JS, CSS, etc., in one plain-text file. IE 9-11 also supports that format (File -> Save as...) but calls it MHT.
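For reference, an MHT/MHTML file is just a MIME multipart/related message in plain text. A minimal sketch of the layout (the boundary string and URLs here are illustrative):

```
MIME-Version: 1.0
Content-Type: multipart/related; boundary="----=_Example"; type="text/html"

------=_Example
Content-Type: text/html; charset="utf-8"
Content-Location: http://example.com/

<html><body><img src="logo.png"></body></html>

------=_Example
Content-Type: image/png
Content-Transfer-Encoding: base64
Content-Location: http://example.com/logo.png

iVBORw0KGgo...
------=_Example--
```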
You can see something is flawed in Redux when you have to pass strings (uppercase constants defined somewhere) around, import them in every file, and pass them as identifiers of what to do with each piece of data.
Slowing down the inevitable tide of https://en.wikipedia.org/wiki/Link_rot. When I cite blog posts or want to share sites that have gone down, I can swap out the links for my archived versions.
This is really cool! I've always had in mind a project where you save every page you visit and somehow expose them in the future, so you know what you visited and maybe get reminded of important stuff based on some heuristic.
vijucat|9 years ago
Just for articles, mind you, not entire websites.
cJ0th|9 years ago
unknown|9 years ago
[deleted]
jl6|9 years ago
nikisweeting|9 years ago
driverdan|9 years ago
A better option is to render a page with JS turned on and save the resulting HTML.
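Headless Chrome can do exactly this with `--dump-dom`, which executes the page's JS and prints the serialized DOM to stdout:

```shell
# Render the page (JS executed) and save the resulting DOM as HTML.
google-chrome --headless --disable-gpu --dump-dom \
  'http://example.com/' > example.html
```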
robzyb|9 years ago
I.e. DOM copy > screenshot > wget?
jccalhoun|9 years ago
frik|9 years ago
Well, I took a screenshot, better than nothing.
avian|9 years ago
$ google-chrome --headless --disable-gpu --print-to-pdf 'http://example.com'
nikisweeting|9 years ago
https://developers.google.com/web/updates/2017/04/headless-c...
edibleEnergy|9 years ago
toomuchtodo|9 years ago
nikisweeting|9 years ago
throw98987|9 years ago
nikisweeting|9 years ago
ticoombs|9 years ago
It also has a Pocket import feature.
[1] https://wallabag.org/en
antman|9 years ago
wget -nc -np -E -H -k -K -p -U 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4' -e robots=off
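For reference, the same command with each flag annotated (meanings per the wget man page; the target URL is an illustrative addition):

```shell
# -nc  --no-clobber: skip files that already exist locally
# -np  --no-parent: never ascend above the starting directory
# -E   --adjust-extension: save HTML/CSS with matching file extensions
# -H   --span-hosts: follow links onto other hosts (for CDN assets)
# -k   --convert-links: rewrite links to work for local viewing
# -K   --backup-converted: keep a .orig copy of files before rewriting
# -p   --page-requisites: fetch the images/CSS/JS each page needs
# -U   spoof an ordinary browser user-agent string
# -e robots=off: ignore robots.txt exclusions
wget -nc -np -E -H -k -K -p \
  -U 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4' \
  -e robots=off 'http://example.com/'
```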
nikisweeting|9 years ago
unicornporn|9 years ago
8ig8|9 years ago
https://pinboard.in/upgrade/
nikisweeting|9 years ago
joshstrange|9 years ago
snackai|9 years ago
djhworld|9 years ago
nikisweeting|9 years ago
anc84|9 years ago
Great project!
rcarmo|9 years ago
motdiem|9 years ago
frik|9 years ago
arkenflame|9 years ago
fiatjaf|9 years ago
Strings!
nikisweeting|9 years ago
bicubic|9 years ago
nikisweeting|9 years ago
ents|9 years ago
nikisweeting|9 years ago
Just one line of regex changes probably: https://github.com/pirate/pocket-archive-stream/blob/master/...
rcarmo|9 years ago
burnbabyburn|9 years ago
anotheryou|9 years ago
nikisweeting|9 years ago