My hunch is that 90-99% of all Jeopardy questions can be answered with information in Wikipedia/Wiktionary, properly understood.
So I'd start with Wikipedia: ~30GB of uncompressed full article text. Break it into chunks; canonicalize phrasings to be more declarative; and include synonym/hypernym/hyponym phrasings (via something like WordNet), so that various 'cluesy' ways of saying things still bring up the same candidate answers.
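To make that concrete, here's a toy sketch of the expansion step. The hand-rolled table below is just a stand-in for WordNet (a real system would query WordNet's synsets and hypernyms directly), and its entries are invented for illustration:

```python
# Toy stand-in for WordNet: a tiny synonym/hypernym table.
# In a real system these expansions would come from WordNet itself.
THESAURUS = {
    "dog": {"synonyms": ["canine", "hound"], "hypernyms": ["mammal", "animal"]},
    "ship": {"synonyms": ["vessel"], "hypernyms": ["craft", "vehicle"]},
}

def expand_term(term):
    """Return the term plus its synonyms and hypernyms, so that
    different 'cluesy' phrasings index to the same candidates."""
    entry = THESAURUS.get(term, {})
    expanded = {term}
    expanded.update(entry.get("synonyms", []))
    expanded.update(entry.get("hypernyms", []))
    return expanded
```

Indexing each chunk under `expand_term(t)` for every term `t` means a clue that says "canine" can still surface a passage that only says "dog".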
Because it's free and compact and well-structured, throw in Freebase, too.
Jeopardy goes back to certain topics/answers again and again. So I'd scrape the full 200K+ clue "J!Archive", and use it as both source and testing material (though of course not testing the system on rounds in its memory).
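If the archive doubles as test material, the held-out split has to be by game, not by clue, so no test round's clues leak into the system's memory. A sketch, assuming each scraped clue is a dict with a `game_id` field (the field names are made up):

```python
import random

def split_by_game(clues, test_fraction=0.1, seed=0):
    """Split clues into train/test sets by game ID, so that no clue
    from a held-out round ever enters the system's index."""
    games = sorted({c["game_id"] for c in clues})
    rng = random.Random(seed)
    rng.shuffle(games)
    n_test = max(1, int(len(games) * test_fraction))
    test_games = set(games[:n_test])
    train = [c for c in clues if c["game_id"] not in test_games]
    test = [c for c in clues if c["game_id"] in test_games]
    return train, test
```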
And I'd add special interpretation rules for commonly-recurring category types: X-letter words, before-and-after, quasi-multiple-choice, words-in-quotes.
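Those interpretation rules could start as nothing fancier than regexes over the category title. A sketch with a few invented patterns (real coverage would need many more rules than this):

```python
import re

# Sketch: recognizers for recurring Jeopardy category formats.
# The patterns are illustrative guesses, not an exhaustive rule set.
CATEGORY_RULES = [
    # e.g. '4-LETTER WORDS' -> the answer must have exactly that length
    (re.compile(r"(\d+)-LETTER WORDS?"), "fixed_length"),
    # e.g. 'BEFORE & AFTER' -> the answer splices two phrases together
    (re.compile(r"BEFORE\s*(&|AND)\s*AFTER"), "before_and_after"),
    # e.g. '"CAT"EGORIES' -> the quoted substring must appear in the answer
    (re.compile(r'"([^"]+)"'), "quoted_substring"),
]

def classify_category(title):
    """Return (rule_name, match) for the first rule the title triggers."""
    for pattern, name in CATEGORY_RULES:
        m = pattern.search(title.upper())
        if m:
            return name, m
    return "default", None
```

The match object carries the constraint (the letter count, the quoted string), which the answer-scoring stage can then enforce on candidates.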
I think such a system might get half or more of the questions in a typical round correct, and in a matter of seconds, even on a single machine.
I'm not certain, but I'd bet Watson was explicitly not allowed to use something like J!Archive as training data. For one, the questions used in the Jeopardy games it played were drawn randomly from previous questions. More importantly, though, learning a stilted, domain-specific language model to play Jeopardy isn't anywhere near as challenging, impressive, or worth pursuing as building something that includes Jeopardy as a subset of its capabilities.
Now, Watson was tuned on Jeopardy questions. I'm sure the learning processes were adjusted in light of mistakes made on the Jeopardy corpus, but that kind of interpolation is a far smaller deal than a full language model.
For what it's worth, I scraped J-Archive.com and wrote a couple of articles for Slate Magazine about what I found. More for the purpose of learning about Jeopardy than learning how to win.
"Search optimization: No, this team focused on making IBM Watson optimized to answer in 3 seconds or less. We can accept a slower response, so we can skip this."

That makes me laugh. I'd guess that search optimization effort has a power-law response here: 3 seconds is extraordinary, 1 minute is tricky, 10 minutes is possible after some solid effort, and 3 days to the heat death of the universe is what you get without optimization.

Not saying you can actually ignore it; it's built into those libraries they casually throw around. Just thought the wording was funny.
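For anyone wondering what "built into those libraries" means in practice: the core of it is an inverted index, so candidate lookup touches only the postings lists for the query terms instead of scanning every chunk. A minimal sketch:

```python
from collections import defaultdict

def build_index(docs):
    """docs: list of strings. Returns term -> set of doc ids."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def candidates(index, query):
    """Docs containing every query term: intersect the postings lists,
    never touching documents that share no terms with the query."""
    postings = [index.get(t, set()) for t in query.lower().split()]
    if not postings:
        return set()
    return set.intersection(*postings)
```

The brute-force alternative rescans every document per query, which is where the "3 days to heat death" end of the curve lives.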
I somehow cannot give up daydreaming wistfully about a personal CM-5. From a previous discussion on HN, it seems it would still be an expensive thing to build as a toy project, particularly because of the hypercube interconnection. Not sure if the source code for StarLisp is available, but I think an emulator lives on at SourceForge.
This article doesn't really tell you how to build a "Watson Jr.", as they call it. It just tells you to use OpenNLP and UIMA (the latter unnecessary, though it's understandable why it's advocated, since IBM created it).

I was kind of hoping there would be a deeper dive into how the data is stored and retrieved. I'm also interested in the machine-learning side of it, but they don't really give any hints about that, either.
This is one of the great moments in the history of humanity, right up there with the first self-powered flying machine. North Carolina got a license plate out of that one: "FIRST IN FLIGHT". Someone is going to get the credit for the open-ended question-answering machine shortly. Who gets it?

The race is on: whoever creates the first reasonably good question-answering machine for demonstration will get their names etched into the sands of time for the next ten thousand years. Get to it!

This industry has the chance to be bigger than Google and Microsoft combined. Every person on Earth will demand one of these, and those who don't have one will be at a remarkable disadvantage. This is going to turn into a trillion-dollar industry.
Open-ended question-answering systems have been around for many years; there are stacks of research papers written about them. Watson is an improvement.
gojomo | 15 years ago
aristus | 15 years ago
tel | 15 years ago
jsvine | 15 years ago
- The first piece: http://www.slate.com/id/2284678/
- The follow-up, in which I answer reader questions: http://www.slate.com/id/2287705/
tel | 15 years ago
srean | 15 years ago
Edit 1: Here it is: http://sourceforge.net/projects/starsim/

Edit 2: Just double-checked; the SourceForge repository has no code! But I found it here: http://examples.franz.com/category/Application/ParallelProgr... @dhess Thanks a lot for that link. I just ordered a copy :)
dhess | 15 years ago
http://www.amazon.com/Paralation-Model-Architecture-Independ...
charlesju | 15 years ago
kirpekar | 15 years ago
rudiger | 15 years ago
RK | 15 years ago
moomba | 15 years ago
nl | 15 years ago
You are right that UIMA isn't needed, but some kind of tool for importing unstructured or semi-structured data is required.
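As a sense of how small that import tool can start out, here's a sketch that turns a "== Title ==" delimited dump (an invented format) into (title, text) records a search index can consume; real pipelines like UIMA add typed annotations on top of this kind of pass:

```python
def parse_dump(raw):
    """Split a text dump into (title, body) records, where each
    article begins with a '== Title ==' heading line."""
    records = []
    title, lines = None, []
    for line in raw.splitlines():
        if line.startswith("== ") and line.endswith(" =="):
            if title is not None:
                records.append((title, "\n".join(lines).strip()))
            # strip the '=' fencing and surrounding spaces from the heading
            title, lines = line.strip("= ").strip(), []
        else:
            lines.append(line)
    if title is not None:
        records.append((title, "\n".join(lines).strip()))
    return records
```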
joakin | 15 years ago
Why, man? Why? If you want to track clicks on links with JavaScript, don't trigger the AJAX call when I click on :not(a) ... -_-'
maeon3 | 15 years ago
dailystatusrpt | 15 years ago