top | item 7305329

Show HN: Pagesnap – Take monthly screenshots of any webpage automatically

62 points | zrail | 12 years ago | pagesnap.io | reply

48 comments

[+] derwiki|12 years ago|reply
I built something very similar last year and still reference it all the time:

https://www.dailysitesnap.com

There's a payment form but I never hooked anything up -- it's all free. You can sign up to have a screenshot of any URL emailed to you on a daily basis, or opt out of emails and just have it archived on the site. I've been collecting daily screenshots of ~20 public web sites for the last few months:

https://www.dailysitesnap.com/public

EDIT: fixed embarrassing typo, good morning everyone!

[+] zrail|12 years ago|reply
This was inspired by a short discussion on Twitter between mperham and patio11 last weekend.[1] The basic app works great right now but I haven't put the billing code in quite yet. If you're interested, sign up for the list[2] and I'll send you an invite (with a discount) when it's ready to go.

https://twitter.com/mperham/status/436976217443422208

http://www.pagesnap.io/list-signup

[+] soneca|12 years ago|reply
You are the Master of Modern Payments and the billing code is the only thing it lacks??

Just kidding! ;) Smart move and good luck!

[+] mountaineer|12 years ago|reply
I saw that same conversation and went looking around thinking there must be something for this. Cool to see it come to life so quickly, nice job.
[+] askedrelic|12 years ago|reply
Love the simple site and focus. I've been working on a similar project focusing more on visual diffing, but haven't been able to launch yet. Still building the server infrastructure, product focus, and working out the billing.

Thanks for sharing!

[+] timjahn|12 years ago|reply
Along these lines, I've always wanted a repo specific wayback machine that would generate a state of your site for every commit, so you could browse what your site looked like at every point along the way as you built your repo.

So I could browse back to last week and see what my site looked like, or to my 3rd commit and see what my site looked like then.

[+] octo_t|12 years ago|reply

  # Walk the history oldest-first so the screenshots come out in build order
  for commit in $(git log --reverse --format="%H")
  do
    git checkout "$commit" && ./build_my_site.sh && ./run_my_site.sh
    phantomjs screenshot.js "$commit"
  done
[+] thehodge|12 years ago|reply
I was thinking about that the other day. It would be awesome to have something that builds the site at every step and converts the screenshots into an animated GIF.
[+] logicallee|12 years ago|reply
What we need is some Nielsen families that have full videos of their everyday surfing -- just whatever they're doing, 24/7, saved for posterity. Imagine seeing how someone used the 'net in 1994 and 2004. It would be an interesting museum, to say the least.
[+] pastylegs|12 years ago|reply
- Adding visual diffs would be very cool.

- Some sort of visual timeline would also be great, i.e. the ability to flip through screenshots with a javascript slider or something

- Exporting to GIFs or video might be useful

- The ability to tag information to certain screenshots would be useful for noting changes and milestones (like in Google Analytics)

[+] minouye|12 years ago|reply
I recently started a project that takes screenshots of top news sites every hour:

http://newsabovethefold.com

It's really fascinating to look at once you've collected a decent number of screenshots to animate through:

http://newsabovethefold.com/animate/1 (CNN.com every hour for the last two months--will load slowly!)

I'd definitely be interested in using this service if I weren't already paying for Snapito.

[+] andygcook|12 years ago|reply
I actually had this idea a few years ago but never got around to building it, so I'm glad you're making it happen.

At my previous startup I hacked this manually by taking a screen grab with Evernote once a month.

I've heard Alexis Ohanian mention that he's thankful for having the foresight to take screenshots of early Reddit builds too.

I'm sure a lot of startups would find this useful for capturing the journey of the product, and for the nostalgia later on.

[+] ecesena|12 years ago|reply
Question: is the number of URLs such a huge burden?

At Theneeds.com we manage about 4k websites (planning to grow to 10k) so the prices of all these services are totally out of our budget, and we ended up with an in-house solution to take screenshots.

I could see myself paying more depending on the total number of screenshots and/or the size of the screenshots, but I really don't get the difference between 1 and 1k URLs.

[+] zrail|12 years ago|reply
It's not necessarily a huge burden, but if someone wants to run 10K URLs through this thing on a daily basis we'd need to have a chat. As of today it's not ready to scale that big.

Of course, if someone did want to run that many through I'm sure we could talk about volume pricing, given sufficient lead time to scale the app.

[+] rschmitty|12 years ago|reply
Cool feature to add would be mobile and tablet versions
[+] zrail|12 years ago|reply
I'll add it to the list. Would customizing the width and user agent allow you to do what you want?
[+] 650REDHAIR|12 years ago|reply
Are the screenshots grabbed by Pagesnap, or on the website's end? Do I have to include some sort of JS snippet? Could you use this to monitor the competition's landing page?
[+] zrail|12 years ago|reply
Screenshots are grabbed from the server. At the moment there's nothing you have to include on your end.
[+] NikolaTesla|12 years ago|reply
I think the real benefit of sites like this is accountability and historical research. Having the ability to see what has been omitted or taken down is far more interesting.
[+] ansimionescu|12 years ago|reply
Reading some old Steve Yegge posts (e.g. https://sites.google.com/site/steveyegge2/five-essential-pho...) I started wondering about the feasibility of a web service that would automatically change links on articles older than say, 5 years, to the equivalent Web Archive/Google cache/other links.
[+] toomuchtodo|12 years ago|reply
Whenever I'm reading something on Reddit/HackerNews/etc that I think is going to be relevant later, I hit my Archive.org bookmarklet:

javascript:void(open('//web.archive.org/save/'+encodeURI(document.location)))

This immediately saves the page to the Internet Archive. I haven't had time to look into whether the Internet Archive has an API you can submit a URL to so it will fetch the content the same way as above.
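Since the bookmarklet just opens web.archive.org/save/ followed by the encoded page URL, the same request can presumably be issued outside the browser. A minimal Node sketch, assuming the Wayback Machine accepts a plain GET to that save path (saveToArchive is a hypothetical helper name, not an official Archive.org client):

```javascript
// Build the same save URL the bookmarklet opens.
const saveUrl = (pageUrl) => 'https://web.archive.org/save/' + encodeURI(pageUrl);

// Request it from Node instead of the browser; the GET should trigger
// a Wayback Machine crawl of the page, just as opening the window does.
async function saveToArchive(pageUrl) {
  const res = await fetch(saveUrl(pageUrl));
  return res.status;
}
```

Whether this counts as a supported API is worth verifying before relying on it in a script.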

[+] spindritf|12 years ago|reply
Very cool, and I think the pricing is just right. Why are there three Start Now buttons right next to each other, though? Does each correspond to the plan you'd like to buy?
[+] zrail|12 years ago|reply
Thanks! Yeah, that's the idea. Sounds like the pricing table could use a little work.
[+] gruseom|12 years ago|reply
That's a really good name.
[+] digitalsushi|12 years ago|reply
This would solve a large problem I have, if it could be taught how to click on a few well-named #ids my dynamic page has. It's not obvious whether it can do this.
[+] mperham|12 years ago|reply
I love everything about this. Good job so far!
[+] zrail|12 years ago|reply
It even uses sidekiq under the hood! :)
[+] kumarski|12 years ago|reply
Good job building it. Kudos to your creativity.