Still not convinced by the reasons offered for client-side scraping. If I'm on my browser, I'm not interested in consuming JSON.
Scraping is really something that's better done in the back end, and today there are a lot of libraries that let you access websites from Java and run all the JavaScript you need in order to display the page properly.
To each their own. I'm not interested in systematic scraping. I just want to take home the web experience I've had and be able to digest and work with it later. The things that I want to work with are the sights and experiences I've had. Client side is perfect.
Second, if I were trying to scrape, I'd rather do it with WebDriver than anything else, and injecting some client-side scraping tools and using WebDriver as a driver, not a driver/scraper, sounds remarkably better.
I see no reason to ever not use a browser to consume HTML content.
Skeptical at first as well, coming from the good ol' curl/grep/sed back-end scraping world, I changed my mind because of authentication issues and instruction saving: no more need to try to authenticate on complex websites via PhantomJS without knowing what actually happens. I can just log in, see in my browser what I actually want to scrape, and still rerun it later as a script.
And I just loooove listening to artoo beep over and over ;)
Back-end and front-end scraping just don't address the same needs. Running back-end monsters to scrape a small-to-medium amount of data only once is such a drag when front-end scraping can take less than half an hour to perform the same task. Plus you can see the results of your code live while browsing the DOM comfortably. Finally, nobody prevents you from using artoo in the back end when you execute JavaScript.
Great work! I really like this! I typically use the JavaScript console bookmarklet for tasks like this, but it is not specifically designed for scraping. I would love to see an option that would allow Artoo commands to be packaged into a PhantomJS script. Developers could use Artoo manually to figure out what elements should be targeted and then the PhantomJS script to run it in an automated fashion.
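The develop-then-automate loop described here — settle on the target elements interactively, then freeze the selectors into a script — can be sketched in plain Python with only the standard library. This is an illustration, not the PhantomJS packaging being proposed (a real pipeline would drive the browser from JavaScript); the class name, target class, and sample markup are all made up:

```python
from html.parser import HTMLParser

class ClassTextScraper(HTMLParser):
    """Collect the text of every element carrying a given class,
    mimicking a selector settled on interactively in the browser.
    Assumes simple, non-void markup for brevity."""
    def __init__(self, target_class):
        super().__init__()
        self.target_class = target_class
        self._depth = 0          # > 0 while inside a matching element
        self.results = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.target_class in classes:
            self._depth += 1
            self.results.append("")  # start collecting a new match
        elif self._depth:
            self._depth += 1         # nested tag inside a match

    def handle_endtag(self, tag):
        if self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if self._depth:
            self.results[-1] += data

# Hypothetical page standing in for the real site.
html = '<ul><li class="title">First</li><li class="title">Second</li></ul>'
scraper = ClassTextScraper("title")
scraper.feed(html)
print(scraper.results)  # → ['First', 'Second']
```

Once the selector logic is pinned down like this, the fetch-and-extract step can run unattended on a schedule.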
This is awesome. I've been dreaming about this for weeks.
I don't know if it is possible, but could this run as a Chrome extension, in a background script, loading various pages, executing code on them, and storing the data in the extension's localStorage?
It could also store the code of the scrapers, for reuse.
Well, I see you already have almost all I suggested. Now I would want something to make the ajaxSpider render the pages using the browser engine, instead of just getting pure HTML.
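For reference, the extension setup described above maps onto standard (era-appropriate, manifest v2) Chrome extension plumbing — a background page that opens tabs, injects code, and persists results. This is a hypothetical manifest fragment, not artoo's actual one:

```json
{
  "manifest_version": 2,
  "name": "background-scraper (hypothetical)",
  "background": { "scripts": ["background.js"] },
  "permissions": ["tabs", "storage", "<all_urls>"]
}
```

The `tabs` and `<all_urls>` permissions let the background script load and script arbitrary pages, while `storage` (or plain localStorage) covers persisting both results and scraper code.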
This jQuery injection looks kind of dangerous. It looks like code from code.jquery.com is loaded into any page. Say I go to https://secretsquirrel.com and they have been very careful to only load JavaScript from their own domain, but now it can also load malicious JavaScript from https://code.jquery.com.
It also disables CSP. I'm not exactly sure how the extension works. Maybe it is turned on/off on a per-tab basis and defaults to off, which would be quite safe. But if it defaults to on, then it can be kind of risky.
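For context on the risk being discussed: a site restricts where scripts may load from via the Content-Security-Policy response header. A page locked down like the illustrative policy below would normally refuse an injected script from code.jquery.com, which is exactly why a CSP override is needed for injection to work — and why leaving such an override on is risky:

```http
Content-Security-Policy: script-src 'self'
```

With this header in place, only scripts served from the page's own origin execute; stripping or relaxing it re-opens the page to third-party script injection.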
jQuery is injected carefully by artoo so it does not break anything on the host page. However, the CSP override is not on by default in artoo; you have to install the Chrome extension to perform this. And this extension should only be activated when scraping, and only developers who know its effects should use it.
This is great for simple, quick jobs. However, you can only do so much in a local browser itself.
I basically built a bookmarklet that lets you define the actions locally in your browser, and then runs the scrapes on your own box, essentially allowing unmetered scraping without charging per page.
EamonLeonard | 11 years ago
zak_mc_kracken | 11 years ago
rektide | 11 years ago
rouxrc | 11 years ago
Yomguithereal | 11 years ago
jacomyal | 11 years ago
brucehart | 11 years ago
Yomguithereal | 11 years ago
ghkbrew | 11 years ago
[1] http://phantomjs.org
jacomyal | 11 years ago
But combining both would be nice, to make it possible to automate scrapers that were developed quickly, directly in the browser, with artoo.
fiatjaf | 11 years ago
fiatjaf | 11 years ago
nnnnni | 11 years ago
It's annoying to have to run scripts multiple times, tweaking them after each run to get exactly what you need. It's a waste of time...
the_cat_kittles | 11 years ago
... or do it with requests + lxml.etree, or whatever you want
when you have what you need, copy and paste into a file
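When the page is static, the requests + lxml.etree step mentioned above can even be approximated with the standard library alone. A rough sketch — the markup, class names, and hrefs here are made up, and for real sites you would keep requests/lxml and handle encodings, errors, and messier HTML:

```python
import xml.etree.ElementTree as ET

# In the real workflow you would do something like:
#   resp = requests.get(url); tree = lxml.etree.HTML(resp.text)
# Here, a small well-formed snippet stands in for the fetched page.
page = """
<html><body>
  <div class="item"><a href="/a">First</a></div>
  <div class="item"><a href="/b">Second</a></div>
</body></html>
"""

tree = ET.fromstring(page)
# ElementTree supports a limited XPath subset, enough for simple scraping.
links = [(a.text, a.get("href"))
         for a in tree.iterfind(".//div[@class='item']/a")]
print(links)  # → [('First', '/a'), ('Second', '/b')]
```

The point stands either way: prototype the extraction interactively, then paste the settled-on selectors into a script like this.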
dfischer | 11 years ago
Might be better to use another name?
benmmurphy | 11 years ago
Yomguithereal | 11 years ago
thebiglebrewski | 11 years ago
kej | 11 years ago
nkozyra | 11 years ago
rektide | 11 years ago
notastartup | 11 years ago
http://scrape.ly
sogen | 11 years ago