
Web scraping with Ruby

55 points| hecticjeff | 11 years ago |chrismytton.uk | reply

31 comments

[+] boie0025|11 years ago|reply
I had to write scrapers in Ruby for a very large application that scraped all kinds of government information from various states. We found (after a lot of pain working with very procedural scrapers) that a modified producer/consumer pattern worked well. Making classes for the producers (classes that described each page to be scraped, with methods matching the modeled data) allowed for easy maintenance. We then created consumers that could be passed any of the page-specific producer classes and knew how to persist the scraped data.

Once I had a good pattern in place I could easily create subclasses of the data type I was trying to scrape, basically pointing each of the modeled data methods to an xpath that was specific to that page.

[+] psynapse|11 years ago|reply
I lead a team that works on several hundred bots scraping at high frequency. We also separate the problem of site structure and payload parsing, though slightly differently.

We have a low frequency discovery process that delves the site to create a representative meta-data structure. This is then read by a high frequency process to create a list of URLs to fetch and parse each time.

The behaviour can then be modified and/or work divided between processes by using command line arguments that cause filtering of the meta-data.
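In plain Ruby, that split might look something like this (the metadata shape, category names, and URLs are invented for illustration): a slow discovery pass emits a metadata structure, and the high-frequency pass filters it into the URL list for one fetch cycle, with the filter driven by command-line arguments.

```ruby
# Low-frequency discovery: walk the site occasionally and emit a
# metadata description of everything that can be fetched
# (hard-coded here; a real pass would crawl and persist this).
def discover
  [
    { "category" => "listings", "url" => "https://example.com/listings?page=1" },
    { "category" => "listings", "url" => "https://example.com/listings?page=2" },
    { "category" => "details",  "url" => "https://example.com/item/1" }
  ]
end

# High-frequency pass: read the metadata and turn it into this
# cycle's URL list; a filter argument lets the same program be
# pointed at just a slice of the work.
def urls_to_fetch(metadata, only: nil)
  metadata
    .select { |entry| only.nil? || entry["category"] == only }
    .map { |entry| entry["url"] }
end

metadata = discover
# e.g. invoked as `scraper --category listings`; hard-coded here.
urls = urls_to_fetch(metadata, only: "listings")
```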

[+] troels|11 years ago|reply
If I understand you right, you have a lot of different data types to scrape, so essentially you have a sub-program for each data type and when a page is downloaded, you let each of these have a go at the page and emit content if it finds any? Or did I completely miss the point?
[+] adanto6840|11 years ago|reply
We do something very similar & I'd love to get in touch if you'd be interested in discussing further. My email is in my profile if you'd be willing to reach out.
[+] Doctor_Fegg|11 years ago|reply
I'd suggest going with mechanize from the off - not just, as the article says, "[when] the site you’re scraping requires you to login first, for those instances I recommend looking into mechanize".

Mechanize allows you to write clean, efficient scraper code without all the boilerplate. It's the nicest scraping solution I've yet encountered.

[+] hecticjeff|11 years ago|reply
I agree that mechanize is an excellent scraping solution, but for something really basic like this, where we're not clicking links or submitting forms, it seemed like a bit of overkill :)
[+] wnm|11 years ago|reply
I recommend having a look at capybara [0]. It is built on top of nokogiri and is actually a tool for writing acceptance tests, but it can also be used for web scraping: you can open websites, click on links, fill in forms, find elements on a page (via xpath or css), get their values, etc. I prefer it over nokogiri because of its nice DSL and good documentation [1]. It can also execute javascript, which is sometimes handy for scraping.

I've spent a lot of time working on web scrapers for two of my projects, http://themescroller.com (dead) and http://www.remoteworknewsletter.com, and I think the holy grail is to build a rails app around your scraper. You can write your scrapers as libs and then make them executable as rake tasks, or even cronjobs. And because it's a rails app you can save all scraped data as actual models and have them persisted in a database. With rails it's also super easy to build an api around your data, or build a quick backend for it via rails scaffolds.

[0] https://github.com/jnicklas/capybara [1] http://www.rubydoc.info/github/jnicklas/capybara/

[+] joshmn|11 years ago|reply
I always see people using something like HTTParty or open-uri for pulling down the page. My preference (by far) is typhoeus, as it supports parallel requests and wraps libcurl.

https://github.com/typhoeus/typhoeus

[+] jstoiko|11 years ago|reply
I'd suggest taking a look at Scrapy (http://scrapy.org). It is built on top of Twisted (asynchronous) and uses XPath, which makes your "scraping" code a lot more readable.
[+] klibertp|11 years ago|reply
Scrapy is written in Python. This looks like a Ruby-focused article. It's even written in the title, no need to actually go and read it. I'd say your suggestion is simply off-topic here.

As for Scrapy itself, it's a big framework, written on top of an even bigger framework which is probably better described as a platform at this point. I've used Scrapy in a couple of projects and I had also worked with Twisted before, which made things significantly easier for me, and it was still quite a bit of a hassle to set things up. IIRC configuring a pipeline for saving images to disk with their original names was kind of a nightmare. It does perform extremely well and scales to insane workloads, but I would never use it for a simple scraper for a single site. For those, requests + lxml works extremely well.

[+] cheald|11 years ago|reply
Nokogiri can use xpath, as well, FWIW. The article's example could be a lot more terse.
[+] pkmishra|11 years ago|reply
Scraping is generally easy, but the challenges come when you're scraping large amounts of unstructured data and need to respond to page changes proactively. Scrapy is very good. I couldn't find a similar tool in Ruby though.
[+] k__|11 years ago|reply
Can anyone list some good resources about scraping, with gotchas etc.?
[+] forlorn|11 years ago|reply
My recipe is to use Typhoeus (https://github.com/typhoeus/typhoeus) + Nokogiri. I have tried lots of different options, including EventMachine with em-http-request and a reactor loop, and concurrent-ruby (both are very poorly documented).

Typhoeus has a built-in concurrency mechanism: callbacks with a specified number of concurrent HTTP requests. You just create a hydra object, then create the first request object with a URL and a callback (you have to check errors like 404 yourself) in which you extract further URLs from the page and push them onto the hydra again with the same or another callback.

[+] programminggeek|11 years ago|reply
Why not just use like watir or selenium?
[+] bradleyland|11 years ago|reply
Because then you're running an entire browser when all you really need is an HTTP library and a parser.
[+] richardpetersen|11 years ago|reply
How do you get the script to save the json file?
[+] cheald|11 years ago|reply
Or, you can write it from within Ruby:

    require "json"  # needed if the script doesn't already load it
    open("out.json", "w") { |f| f.puts JSON.dump(showings) }
[+] hecticjeff|11 years ago|reply
You can redirect the output of the script to a json file, so in this case something like:

$ ruby scraper.rb > showings.json

[+] mychaelangelo|11 years ago|reply
Thanks for sharing this - great scraping intro for us newbies (I'm new to Ruby and RoR).