
Artoo, the client-side scraping companion

83 points | jacomyal | 11 years ago | medialab.github.io

28 comments

[+] zak_mc_kracken|11 years ago|reply
Still not convinced by the reasons offered for client-side scraping. If I'm in my browser, I'm not interested in consuming JSON.

Scraping is really something that's better done in the back end, and today there are a lot of libraries that let you access websites from Java and run all the JavaScript you need in order to display the page properly.

[+] rektide|11 years ago|reply
To each their own. I'm not interested in systematic scraping. I just want to take back, take home, the web experience I've had, and be able to digest and work with it later. The things that I want to work with are the sights and experiences I've had. Client side is perfect for that.

Second, if I were trying to scrape, I'd rather do it with WebDriver than anything else, and injecting some client-side scraping tools and using WebDriver as a driver, not a driver/scraper, sounds remarkably better.

I see no reason to ever not use a browser to consume HTML content.

[+] rouxrc|11 years ago|reply
Skeptical at first as well, coming from the good ol' curl/grep/sed backend-scraping world, I changed my mind over authentication issues and instruction saving: no more need to try to authenticate on complex websites via PhantomJS without knowing what actually happens. I can just log in, see in my browser what I actually wanna scrape, and still rerun it later as a script.

And I just loooove listening to artoo beep over and over ;)

[+] Yomguithereal|11 years ago|reply
Backend and frontend scraping just don't address the same needs. Running backend monsters to scrape small to medium amounts of data only once is such a drag when frontend scraping can take less than half an hour to perform the same task. Plus you can see the results of your code live while browsing the DOM comfortably. Finally, nothing prevents you from using artoo on the backend wherever you execute JavaScript.
[+] jacomyal|11 years ago|reply
Basically, it makes scraping accessible to almost anyone who can use a browser and write some CSS selectors.
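In the browser console that workflow looks roughly like the sketch below. This is a hypothetical example: the `.article` selector and the field names are invented, and it assumes artoo's `scrape` helper (a root selector plus a model describing each field) and its JSON download helper as described on the project page.

```javascript
// Hypothetical: scrape a list of articles from the page you're looking at.
// '.article', 'title' and 'url' are made-up names for illustration.
var data = artoo.scrape('.article', {
  title: {sel: 'h2', method: 'text'},  // text of each article's heading
  url: {sel: 'a', attr: 'href'}        // href of each article's link
});

// Download the result as a JSON file straight from the browser.
artoo.savePrettyJson(data);
```

This only runs in a page where artoo has been injected (via the bookmarklet or the Chrome extension), so it is a console sketch rather than a standalone script.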
[+] brucehart|11 years ago|reply
Great work! I really like this! I typically use the JavaScript console bookmarklet for tasks like this, but it is not specifically designed for scraping. I would love to see an option that would allow Artoo commands to be packaged into a PhantomJS script. Developers could use Artoo manually to figure out what elements should be targeted and then the PhantomJS script to run it in an automated fashion.
[+] Yomguithereal|11 years ago|reply
This would indeed be nice and this is precisely what we intend to code next.
[+] ghkbrew|11 years ago|reply
What advantages does this have over Phantom.js[1] ?

[1] http://phantomjs.org

[+] jacomyal|11 years ago|reply
Both are really different: Phantom.js is a headless browser, while artoo is a tool to easily scrape data from websites.

But combining both would be nice, to make it possible to automate scrapers that were developed quickly, directly in the browser, with artoo.

[+] fiatjaf|11 years ago|reply
This is awesome. I've been dreaming about this for weeks.

I don't know if it is possible, but could this run as a Chrome extension, in a background script, loading various pages, executing code on them, and keep going, storing the data in the extension's localStorage?

It could also store the code of the scrapers, for reuse.

[+] fiatjaf|11 years ago|reply
Well, I see you already have almost all I suggested. Now I would want something to make the ajaxSpider render the pages using the browser engine, instead of just getting pure HTML.
[+] nnnnni|11 years ago|reply
I would like to see something that helps create useful, specific scrapers for languages like Ruby and Python.

It's annoying to have to run scripts multiple times, tweaking them after each run to get exactly what you need. It's a waste of time...

[+] the_cat_kittles|11 years ago|reply

  $ ipython

  In [1]: from pyquery import PyQuery as pq
  In [2]: pq(url="http://www.foo.com")("<some jquery selectors>")
(inspect output, repeat till right)

... or do it with requests + lxml.etree, or whatever you want

when you have what you need, copy and paste into a file
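The requests + lxml variant of that loop can be sketched like this. To keep the sketch self-contained the HTML is inlined rather than fetched over the network (in practice you'd pass `requests.get(url).content` instead), and the markup and field names are invented for illustration:

```python
from lxml import html

# Inlined stand-in for requests.get("http://www.foo.com").content
page = b"""
<html><body>
  <div class="story"><a href="/a">First</a></div>
  <div class="story"><a href="/b">Second</a></div>
</body></html>
"""

tree = html.fromstring(page)

# Iterate on the XPath in the REPL until it matches what you want,
# then paste the finished expression into a script.
links = tree.xpath('//div[@class="story"]/a')
data = [{"title": a.text, "url": a.get("href")} for a in links]
print(data)
```

Same idea as the pyquery version: tweak the expression interactively, then freeze it into a file once the output looks right.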

[+] benmmurphy|11 years ago|reply
This jQuery injection looks kind of dangerous: it seems code from code.jquery.com gets loaded into any page. Say I go to https://secretsquirrel.com and they have been very careful to only load JavaScript from their own domain, but now it can also load malicious JavaScript from https://code.jquery.com.

It also disables CSP. I'm not exactly sure how the extension works; maybe it is turned on/off on a per-tab basis and defaults to off, which would be quite safe, but if it defaults to on then it can be kind of risky.

[+] Yomguithereal|11 years ago|reply
jQuery is injected carefully by artoo so it does not break anything on the host page. The CSP override, however, is not artoo's default behavior: you have to install the Chrome extension to get it. And that extension only has to be activated when scraping, and should only be used by developers who know its effects.
[+] thebiglebrewski|11 years ago|reply
Yeaaaah you might wanna rename that. I think the other Artoo already has enough traction and this will just confuse people.
[+] kej|11 years ago|reply
They seem different enough that anyone interested in these would be able to tell them apart.
[+] nkozyra|11 years ago|reply
I'd also argue that one makes more sense in terms of naming.
[+] notastartup|11 years ago|reply
This is great for simple, quick jobs. However, you can only do so much in a local browser.

I basically built a bookmarklet that lets you define the actions locally in your browser and then run the scrapes on your own box, essentially allowing unmetered scraping without charging per page.

http://scrape.ly

[+] sogen|11 years ago|reply
closed beta?