item 5892779

Python Headless Web Browser Scraping on Amazon Linux

102 points | steven5158 | 12 years ago | fruchterco.com

39 comments

[+] fauigerzigerk|12 years ago|reply
PhantomJS is brilliant, but Selenium is a questionable choice for this task. For some reason, the creators of Selenium have decided that passing HTTP status codes back through the API is and always will be outside the scope of their project. So if you request a page and it returns 404 you have no way to find out (other than using crude heuristics). This makes Selenium completely unusable for anything I would have used it for.

Fortunately you can do it by using phantomjs directly instead of going through the Selenium WebDriver API. Maybe one day the phantomjs WebDriver API implementation (ghostdriver) will extend the API to pass HTTP status information back to the caller. Until then, this API is unusable (at least for me).
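One crude stdlib-only workaround (my own sketch, not part of Selenium or ghostdriver) is to probe the URL separately before handing it to the WebDriver, so at least the top-level status code is known:

```python
import urllib.request
import urllib.error

def fetch_status(url):
    """Return the HTTP status code for url, including error statuses.

    urllib raises HTTPError for 4xx/5xx responses, so we catch it and
    pull the code out instead of letting it propagate.
    """
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.getcode()
    except urllib.error.HTTPError as err:
        return err.code
```

It costs an extra request and can race against a changing server — exactly the kind of heuristic the parent complains about — but it does catch a plain 404 before you drive the browser.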

[+] nirvdrum|12 years ago|reply
Well, I think the matter is a bit more complicated than that. When dealing with a full browser, you fetch a lot of resources. The status code for the first page fetch may be easily obtained, but your API gets very wonky as soon as you want to get status codes for all linked resources. Even if you managed that, any Ajax requests would complicate things, especially if they have deferred loading. And then you have WebSockets.

There are tools, such as BrowserMob Proxy, far better suited for monitoring HTTP traffic. And they'll get you all the headers. You can even capture to HAR so you can measure performance.
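Once you have a HAR capture (from BrowserMob Proxy or any other recorder), pulling the per-resource status codes back out is a few lines; a minimal sketch against the standard HAR 1.2 layout:

```python
import json

def har_statuses(har_text):
    """Map each captured URL to its HTTP status code from a HAR dump.

    HAR is just JSON: entries live under log.entries, each with a
    request and a response object.
    """
    har = json.loads(har_text)
    return {entry["request"]["url"]: entry["response"]["status"]
            for entry in har["log"]["entries"]}
```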

[+] swinglock|12 years ago|reply
Aren't you stuck with JavaScript then? Sure, PhantomJS is awesome, but Python is even in the title, so it's not just a side note.
[+] slaxo|12 years ago|reply
For anyone using PhantomJS I'd recommend checking out CasperJS (http://casperjs.org/). It adds some nice features to PhantomJS and takes out a lot of the pain points.
[+] diminoten|12 years ago|reply
I find it preferable to determine the requests that jQuery is making and perform them myself to extract the necessary data, rather than load up a whole browser just to do the same thing.

Selenium is terrible, performance-wise, and requires a significant investment in environment setup to work reliably. I try to avoid it except when I absolutely cannot.
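A sketch of that approach with nothing but the stdlib; the endpoint and parameter names here are hypothetical — you'd find the real ones in your browser's network tab or a proxy like Fiddler:

```python
import urllib.parse
import urllib.request

def build_ajax_request(endpoint, params):
    """Recreate the XHR the page's jQuery code would send, so the JSON
    endpoint can be hit directly without spinning up a browser.

    X-Requested-With mimics jQuery's default XHR marker, which some
    backends check before serving JSON instead of HTML.
    """
    url = endpoint + "?" + urllib.parse.urlencode(params)
    return urllib.request.Request(
        url, headers={"X-Requested-With": "XMLHttpRequest"})
```

Pass the resulting Request to urllib.request.urlopen and feed the body to json.loads.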

[+] ArbitraryCrow|12 years ago|reply
I wound up doing this myself, after spending an undue amount of time struggling with a morass of insanely written Javascript. Fiddler proved indispensable for observing the actual interaction with the web server.
[+] brechin|12 years ago|reply
If you're writing Python and need to do something like this, you could try using Phantompy, a Python port of PhantomJS: https://github.com/niwibe/phantompy

It's still "in an early stage of development" but it's on my list of libraries to keep an eye on for when I have time to tackle the JS-heavy sites of the world.

[+] spikels|12 years ago|reply
For scraping, phantomjs or casperjs is the best way to go, but you will have to use some JavaScript [1]. Both give you access to everything a WebKit browser user does, with either a Node-style callback syntax (phantomjs) or a procedural/promises-style syntax (casperjs). Easy to set up, simple to use, and fast enough for scraping, but only WebKit (for now).

For testing on browsers other than WebKit (or vendor-specific WebKit edge cases), use Selenium. Harder to set up, more complex, probably faster (still slow for testing), but not limited to WebKit.

[1] Sorry folks, but some JavaScript is required to interact with the web programmatically; you'll also need some HTML and CSS.

[+] xfour|12 years ago|reply
One more thing: has anyone used BeautifulSoup lately? Is the project still active? I mean, the website is cute and all, but I find pyquery (also based on lxml) so much easier for parsing the scraped data.
[+] brechin|12 years ago|reply
I'd consider it still active, since it was updated on 2013-06-07: https://pypi.python.org/pypi/beautifulsoup4

I prefer using lxml myself, since I like using XPath queries, but bs4 sometimes parses broken HTML better than any of the provided lxml parsers do.
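A small illustration of both points, assuming lxml and beautifulsoup4 are installed:

```python
from bs4 import BeautifulSoup
from lxml import html

broken = "<p>first<p>second"  # unclosed tags, no html/body wrapper

# lxml: build a proper document, then query it with terse XPath.
doc = html.document_fromstring(broken)
xpath_texts = doc.xpath("//p/text()")  # ['first', 'second']

# bs4: tolerant tag-soup handling behind a convenient search API.
soup = BeautifulSoup(broken, "html.parser")
paragraphs = soup.find_all("p")
```

Which tree you get for broken markup depends on the parser backend, which is why bs4 sometimes "fixes" soup that a given lxml parser mangles, and vice versa.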

[+] ianhawes|12 years ago|reply
Something to consider is that the trend over the past year has been to use headless browsers over BeautifulSoup, cURL, etc., because headless browsers are harder for anti-scraping systems to detect and can interpret JavaScript.
[+] takluyver|12 years ago|reply
BS4, which is still actively developed, got out of the parser game - it can now use lxml (fast) or html5lib (highly tolerant) to parse the HTML. It's kept the convenient interface to dig into the DOM, and it's kept the UnicodeDammit encoding detection system.
[+] 616c|12 years ago|reply
I recently tried to get back into Selenium for a work-related project and, despite its frustrations, it is one of my favorite open source gems found in the last several years. When I showed it to uninitiated web devs, their heads almost exploded from joy and amazement. Your setup with Selenium intrigued me, since the pain point for me has become how difficult it is to maneuver some browsers with Selenium IDE to throw together ideas, if that is even encouraged anymore.
[+] phaer|12 years ago|reply
You are installing some devel packages, but I don't see anything compiling. Does the Selenium installation build native extensions? Then the commands should probably be the other way round. Or does phantomjs compile something on the first run?

Minor nitpick: I don't think it is a good idea to copy a binary directly to /usr/bin without a package manager. You could just put it into /opt and symlink to /usr/(local/)bin.

[+] kawsper|12 years ago|reply
The file that he is fetching ( phantomjs-1.9.1-linux-x86_64.tar.bz2 ) contains prebuilt executables for his platform, along with some usage examples and a readme.
[+] j-kidd|12 years ago|reply
Off topic: it is perfectly fine to install things like PyQt / PySide on a headless server. I suppose the problem is that the distro doesn't provide these packages?

Also, PhantomJS works fine in this case because the binary in the tarball is statically compiled. You can find a whole lot of Qt stuff inside the PhantomJS source repository. There ain't no such thing as "truly headless".

[+] techaddict009|12 years ago|reply
Wow, I was searching for something similar. I was actually trying to build an app that scrapes data from movie ticket booking sites and tells the user via SMS whether tickets are still available, because not everyone has internet access in India yet.

@Steven5158 thanks for the share.

If anyone here wants help in building SMS apps do contact me.

[+] keypusher|12 years ago|reply
We do quite a bit of web scraping / parsing on headless servers with Selenium. What we did was just install some X packages and run a VNC server on the headless clients with Firefox. The cool thing about that is you can then watch the scripts executing if you connect to the VNC session, take a screenshot on failure, etc.
[+] Shakahs|12 years ago|reply
Brilliant! I've been using Xvfb for headless operation, didn't even consider using VNC.
[+] JimmaDaRustla|12 years ago|reply
I am under the assumption that python-requests would have the same issue: it does not render the page, it only retrieves the original page response.

Very, very good to know when diving into scraping.
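Exactly — a plain HTTP client hands you the page source as served, with every script tag unexecuted. A small self-contained demonstration using only the stdlib (a local server stands in for a JS-heavy site):

```python
import http.server
import threading
import urllib.request

# A page whose visible content only exists after JavaScript runs.
PAGE = (b"<html><body><div id='app'></div>"
        b"<script>document.getElementById('app').textContent='tickets';"
        b"</script></body></html>")

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
with urllib.request.urlopen(url) as resp:
    body = resp.read().decode()
server.shutdown()

# The div comes back empty and the script comes back as text: nothing
# executed it. A headless browser would have run it and filled the div.
print("<div id='app'></div>" in body)
```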