PhantomJS is brilliant, but Selenium is a questionable choice for this task. For some reason, the creators of Selenium have decided that passing HTTP status codes back through the API is, and always will be, outside the scope of their project. So if you request a page and it returns a 404, you have no way to find out (other than using crude heuristics). This makes Selenium completely unusable for anything I would have used it for.
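To make the "crude heuristics" concrete: since WebDriver never exposes the status code, the best you can do is sniff the rendered document for error-page markers. A minimal sketch; in real Selenium code the inputs would come from something like `driver.title` and the body element's text (those names are the standard Selenium attributes, but everything else here is illustrative):

```python
# Crude heuristic for guessing that a page fetched through Selenium was a 404.
# The WebDriver API does not expose HTTP status codes, so all we can do is
# look at the rendered content for error-page markers.

ERROR_MARKERS = ("404", "not found", "page not found")

def looks_like_404(title, body_text):
    """Guess whether a rendered page is an error page.

    In real code, `title` would be driver.title and `body_text` the text
    of the <body> element; only the first chunk of the body is checked.
    """
    haystack = (title + " " + body_text[:500]).lower()
    return any(marker in haystack for marker in ERROR_MARKERS)
```

It is exactly as fragile as it looks: a page that legitimately mentions "404" will trip it, and a custom-branded error page without those phrases will slip through.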
Fortunately you can do it by using phantomjs directly instead of going through the Selenium WebDriver API. Maybe one day the phantomjs WebDriver API implementation (ghostdriver) will extend the API to pass HTTP status information back to the caller. Until then, this API is unusable (at least for me).
Well, I think the matter is a bit more complicated than that. When dealing with a full browser, you fetch a lot of resources. The status code for the first page fetch may be easily obtained, but your API gets very wonky as soon as you want to get status codes for all linked resources. Even if you managed that, any Ajax requests would complicate things, especially if they have deferred loading. And then you have WebSockets.
There are tools, such as BrowserMob Proxy, that are far better suited for monitoring HTTP traffic. And they'll get you all the headers. You can even capture to HAR so you can measure performance.
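Once you have a HAR capture, pulling the status code of every resource out of it is plain JSON work, since HAR is just a JSON format with entries under `log.entries`. A small stdlib-only sketch (the sample capture below is fabricated for illustration):

```python
import json

def status_codes(har_text):
    """Map each requested URL to its HTTP status code from a HAR capture.

    HAR stores one entry per request under log.entries, each with a
    request object (the URL) and a response object (the status).
    """
    har = json.loads(har_text)
    return {e["request"]["url"]: e["response"]["status"]
            for e in har["log"]["entries"]}

# A minimal fabricated HAR fragment, shaped like what a capturing proxy
# such as BrowserMob Proxy would record:
sample = json.dumps({"log": {"entries": [
    {"request": {"url": "http://example.com/"},
     "response": {"status": 200}},
    {"request": {"url": "http://example.com/missing.js"},
     "response": {"status": 404}},
]}})

codes = status_codes(sample)
```

This is how you answer the "did any linked resource 404?" question that WebDriver alone cannot: every subresource the browser fetched shows up as its own entry.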
For anyone using PhantomJS I'd recommend checking out CasperJS (http://casperjs.org/). It adds some nice features to PhantomJS and takes away a lot of the pain points.
I find it preferable to determine the requests that jQuery is making and perform them myself to extract the necessary data, rather than load up a whole browser just to do the same thing.
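In practice that means watching the XHRs in the browser's network tab, then reproducing them with a plain HTTP client. A sketch of reproducing the kind of GET jQuery would issue; the endpoint and parameters here are hypothetical, spotted in an imaginary network tab:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_xhr_request(endpoint, params):
    """Reproduce the GET request a page's jQuery code would issue.

    jQuery's $.ajax sends an X-Requested-With header; some servers vary
    their response on it, so we set it too.
    """
    url = endpoint + "?" + urlencode(params)
    return Request(url, headers={"X-Requested-With": "XMLHttpRequest"})

# Hypothetical endpoint discovered by watching the page's Ajax traffic:
req = build_xhr_request("http://example.com/api/search",
                        {"q": "tickets", "page": 1})
# urllib.request.urlopen(req) would then fetch the JSON directly,
# no browser required.
```

The payoff is that you skip rendering entirely and get back structured JSON instead of having to scrape it out of the DOM.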
Selenium is terrible, performance-wise, and requires a significant investment in environment setup in order to work reliably. I try to avoid it except when I absolutely cannot.
I wound up doing this myself, after spending an undue amount of time struggling with a morass of insanely written Javascript. Fiddler proved indispensable for observing the actual interaction with the web server.
If you're writing Python and need to do something like this, you could try using Phantompy, a Python port of PhantomJS: https://github.com/niwibe/phantompy
It's still "in an early stage of development" but it's on my list of libraries to keep an eye on for when I have time to tackle the JS-heavy sites of the world.
For scraping, phantomjs or casperjs is the best way to go, but you will have to use some JavaScript [1]. Both give you access to everything a WebKit browser user does, with either a Node-style callback syntax (phantomjs) or a procedural/promises-style syntax (casperjs). Easy to set up, simple to use, and fast enough for scraping, but WebKit-only (for now).
For testing on browsers other than WebKit (or vendor-specific WebKit edge cases), use Selenium. Harder to set up, more complex, probably faster (still slow for testing), but not limited to WebKit.
[1] Sorry folks, but some JavaScript is required to programmatically interact with the web - you'll also need some HTML and CSS.
One more thing: has anyone here been using BeautifulSoup lately? Is the project still active? I mean, the website is cute and all, but I find pyquery (also based on lxml) so much easier for parsing the scraped data.
Something to consider is that the trend over the past year has been to use headless browsers over BeautifulSoup, cURL, etc., because headless browsers are harder for anti-scraping systems to detect and can interpret JavaScript.
BS4, which is still actively developed, got out of the parser game - it can now use lxml (fast) or html5lib (highly tolerant) to parse the HTML. It's kept the convenient interface to dig into the DOM, and it's kept the UnicodeDammit encoding detection system.
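That parser choice is made per call, as a string argument to the `BeautifulSoup` constructor. A small sketch, assuming bs4 is installed; it uses the stdlib `"html.parser"` backend so it runs anywhere, but `"lxml"` (fast) or `"html5lib"` (browser-grade error recovery) would be passed the same way:

```python
from bs4 import BeautifulSoup

# Slightly sloppy HTML: unquoted attribute value.
doc = "<html><body><p class=lead>Hello</p><p>world</p></body></html>"

# The second argument selects the parser backend: "html.parser" (stdlib),
# "lxml" (fast), or "html5lib" (highly tolerant of broken markup).
soup = BeautifulSoup(doc, "html.parser")

paragraphs = [p.get_text() for p in soup.find_all("p")]
# bs4 treats class as a multi-valued attribute, so this is a list:
lead_class = soup.find("p")["class"]
```

So bs4 today is really the convenient navigation layer (`find_all`, attribute access, UnicodeDammit) on top of whichever parser you pick.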
I recently tried to get back into Selenium for a work-related project and, despite its frustrations, it is one of my favorite open source gems of the last several years. When I showed it to uninitiated web devs, their heads almost exploded with joy and amazement. Your setup with Selenium intrigued me, since the pain point for me has become how difficult it is to maneuver some browsers with Selenium IDE to throw ideas together, if that is even encouraged anymore.
You are installing some devel packages, but I don't see anything compiling. Does the selenium installation build native extensions? If so, the commands should probably be the other way round. Or does phantomjs compile something on the first run?
Minor nitpick: I don't think it is a good idea to copy a binary directly to /usr/bin, without a package manager. You could just put it into /opt and symlink to /usr/(local/)bin.
Off topic: it is perfectly fine to install things like PyQt / PySide on a headless server. I suppose the problem is that the distro doesn't provide these packages?
Also, PhantomJS works fine in this case because the binary in the tarball is statically compiled. You can find a whole lot of Qt stuff inside the PhantomJS source repository. There ain't no such thing as "truly headless".
Wow, I was searching for something similar. I'm actually trying to build an app which scrapes data from movie ticket booking sites and tells the user via SMS whether tickets are still available, because not everyone has internet access in India yet.
@Steven5158 thanks for the share.
If anyone here wants help in building SMS apps do contact me.
We do quite a bit of web scraping / parsing on headless servers with Selenium. What we did was just install some X packages and run a VNC server on the headless clients with Firefox. The cool thing about that is you can then watch the scripts executing by connecting to the VNC session, take a screenshot on failure, etc.
See: http://voorloopnul.com/blog/a-python-proxy-in-less-than-100-...
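The linked post builds a tiny TCP proxy in Python. The core idea, sketched below with nothing beyond the stdlib, is that a proxy sits between the browser and the web, so every request line, header, and status code passes through it - exactly the information WebDriver withholds. The relaying of bytes upstream is elided; this is a sketch of the shape, not a complete proxy:

```python
import socket
import threading

def parse_request_target(request_bytes):
    """Pull the method and target out of the first line of an HTTP request.

    Proxied requests carry an absolute URL in the request line,
    e.g. b"GET http://example.com/ HTTP/1.1".
    """
    first_line = request_bytes.split(b"\r\n", 1)[0].decode("ascii")
    method, target, _version = first_line.split(" ")
    return method, target

def handle(client_sock):
    # One client connection: read the request, log it, then (in a full
    # implementation) open a socket to the target and relay bytes both ways.
    request = client_sock.recv(65536)
    method, target = parse_request_target(request)
    print(method, target)  # here you could record headers, status codes, ...
    client_sock.close()

if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 8080))
    server.listen(5)
    while True:
        conn, _addr = server.accept()
        threading.Thread(target=handle, args=(conn,)).start()
```

Point the browser (or PhantomJS, via its `--proxy` flag) at 127.0.0.1:8080 and every fetch becomes observable.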
I prefer using lxml myself, since I like using XPath queries, but bs4 sometimes parses broken HTML better than any of the provided lxml parsers do.
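For comparison, this is what those XPath queries look like with lxml. A sketch assuming lxml is installed; the document is a made-up fragment:

```python
from lxml import html

page = ("<html><body><div id='links'>"
        "<a href='/a'>A</a><a href='/b'>B</a>"
        "</div></body></html>")
tree = html.fromstring(page)

# XPath keeps structural queries terse: attributes and text nodes
# can be selected directly in the expression.
hrefs = tree.xpath("//div[@id='links']/a/@href")
labels = tree.xpath("//a/text()")
```

The trade-off the comment describes: this is fast and expressive, but when the HTML is badly broken, bs4 with the html5lib backend tends to recover a more sensible tree.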
Very, very good to know when diving into scraping.