Ask PG: HN Ngram Viewer?
8 points | zissou | 12 years ago
I work in an academic lab where I'm one of the developers of a system that generates ngram viewers from large corpora of text, which we call "Bookworms". Here are a few Bookworms we've created:
arXiv scientific publications: http://bookworm.culturomics.org/arxiv/
US Congress legislation: http://bookworm.culturomics.org/congress/
Open Library books: http://bookworm.culturomics.org/OL/
Chronicling America historical newspapers: http://bookworm.culturomics.org/ChronAm/
Social Science Research Network research paper abstracts: http://bookworm.culturomics.org/ssrn/
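To give a flavor of what a Bookworm-style viewer does under the hood, here is a minimal, illustrative sketch of the core computation: tokenize each document, count ngrams per time bucket, and normalize to a per-million rate so buckets of different sizes are comparable. This is not the Bookworm implementation, just the general technique; the function names and the per-million normalization are my own assumptions.

```python
import re
from collections import Counter, defaultdict

def ngrams(text, n):
    """Lowercase word tokens, then a sliding window of n-grams."""
    toks = re.findall(r"[a-z0-9']+", text.lower())
    return [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

def ngram_series(docs, n=1):
    """docs: iterable of (month, text) pairs.

    Returns {ngram: {month: frequency per million ngrams}}, the kind of
    time series an ngram viewer plots.
    """
    series = defaultdict(Counter)
    totals = Counter()  # total ngram count per month, for normalization
    for month, text in docs:
        grams = ngrams(text, n)
        totals[month] += len(grams)
        for g in grams:
            series[g][month] += 1
    return {g: {m: 1e6 * c / totals[m] for m, c in months.items()}
            for g, months in series.items()}

# Toy corpus: two "months" of text.
docs = [("2013-01", "the NSA leaks"), ("2013-06", "NSA NSA everywhere")]
freq = ngram_series(docs)
```

A real corpus would of course stream millions of documents from a database rather than a list, but the counting and normalization logic is the same.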
We have more Bookworms in the pipeline, including historical legislation in the UK and a massive corpus of texts (70MM+ documents) from the National Library of Australia (Trove) spanning multiple centuries. A new GUI for all our Bookworms will also be rolling out shortly. (Preview: http://bookworm.culturomics.org/new_gui_teaser.png).
In my opinion, HN would be an awesome candidate for an ngram viewer because there are so many subsets of topics that come, go, or stay here, such as the frequency of discussions about web technologies, programming languages, companies/services, the NSA, etc.
If this is something the HN admins would be interested in, I'd be happy to put it together. If a privacy agreement is desired before passing off any bulk data, that is not a problem as we've gone this route before, albeit only for private ngram viewers we've created for companies, like the NYT, to use internally.
kogir | 12 years ago
I'm working on a more comprehensive first-party API for HN, and plan to implement the following, in this order:
Sadly, I can't commit to any firm timeline for future progress right now, but know that I'm working on it :) -- Edit: Removed link to broken data file. Fixing it up tomorrow.
zissou | 12 years ago
However, my question still remains with regard to historic posts/comments. The historic aspect is really the important element here. Generally speaking, building an ngram viewer requires a collection of texts over time, with each text having some kind of metadata that is categorical, boolean, datetime, or numeric. Categorical data can always be made out of numeric data by creating bins -- e.g., bucketing posts by the author's karma at the time the comment/post was created into 1-50, 51-150, 151-300, etc. Datetimes can also be turned into useful categorical variables for an ngram viewer, such as day of the week (to spot weekly seasonality trends) or day of the year (annual seasonality trends).
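The binning and datetime facets described above can be sketched in a few lines. The bin boundaries below are just the illustrative ones from this comment, not anything HN or Bookworm actually uses:

```python
from datetime import datetime, timezone

# Illustrative karma bins from the comment above (assumes karma >= 1).
KARMA_BINS = [(1, 50), (51, 150), (151, 300)]

def karma_bin(karma):
    """Map a numeric karma score to a categorical bin label."""
    for lo, hi in KARMA_BINS:
        if lo <= karma <= hi:
            return f"{lo}-{hi}"
    return f"{KARMA_BINS[-1][1] + 1}+"  # overflow bin for high karma

def datetime_facets(ts):
    """Derive categorical facets from a Unix timestamp (UTC)."""
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    return {
        "day_of_week": dt.strftime("%A"),       # weekly seasonality
        "day_of_year": dt.timetuple().tm_yday,  # annual seasonality
        "month": dt.strftime("%Y-%m"),          # the usual ngram time axis
    }
```

Each comment or post would get tagged with these labels at ingest time, so the viewer can later filter or group a term's time series by any facet.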
If I were allowed, I would be willing to write a scraper/crawler to discover as many historic threads (and thus comments, since threads -> comments) as possible using HNSearch, but this could take a long time depending on rate limits and/or be subject to unknown biases in my discovery method. I'm sure you can understand why a "top-down" approach like a database dump would make for a much higher-quality corpus than the "bottom-up" approach of a crawler. I have no idea whether a "database dump of everything" is even feasible, as I don't know anything about HN's backend infrastructure. However, if it is feasible, then I'm certain I can work with whatever is available. Adding structure to unstructured data is my bread and butter.
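The "bottom-up" crawl described above usually takes the shape of a backwards time-cursor walk: fetch a page of items older than the cursor, advance the cursor to the oldest item seen, and repeat. This sketch keeps the actual HTTP call behind a hypothetical `fetch_page` callable, since the exact HNSearch endpoint and its parameters are not specified here; any search API with a created-at filter would slot in:

```python
import time

def crawl_history(fetch_page, start_ts, end_ts, delay=0.0):
    """Walk backwards through time using a created-at cursor.

    `fetch_page` is a hypothetical callable (ts) -> list of items, each a
    dict with at least 'id' and 'created_at_i' (Unix seconds), returned
    newest-first and strictly older than `ts`.
    """
    seen = set()
    items = []
    cursor = end_ts
    while cursor > start_ts:
        page = fetch_page(cursor)
        if not page:
            break  # no items remain older than the cursor
        for item in page:
            if item["id"] not in seen:  # pages can overlap; dedupe by id
                seen.add(item["id"])
                items.append(item)
        cursor = page[-1]["created_at_i"]  # oldest item on this page
        time.sleep(delay)  # respect rate limits between requests
    return items
```

The dedupe-by-id step matters because items can shift between pages while the crawl runs; the unknown-bias worry above is exactly that deleted or unlinked threads never show up in any page.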
I really think this would be a very cool tool that a lot of people would enjoy, so I'm willing to do what is needed on my end to help make it work. After all, I'd be on the clock while working on this rather than just a hobby project, so the incentives are definitely aligned on my end.
If you want to discuss anything in private, I can be reached at the following reversed address: moc{dot}liamg{at}yalkcin{dot}wehttam
kristianp | 12 years ago