mstoehr
|
11 years ago
|
on: Who Can Name the Bigger Number?
To say that the sequence is computable means that you must present a single Turing machine, call it T, that will produce BB(n) in finitely many steps after taking input n. This specific Turing machine T has a fixed number of rules, say N rules; then by the definition of the Busy Beaver, T(N+1) <= BB(N) = T(N), but BB(N+1) > BB(N), so T(N+1) does not compute BB(N+1). The flaw in your inductive proof is that you have shown that for each N there is a Turing machine that can compute BB(N), but you haven't shown that one and the same Turing machine can compute all the numbers in the sequence.
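To put the quantifier point in symbols (my own restatement of the argument above, not from the original article):

```latex
% True: for each n there is SOME machine that outputs BB(n)
% (e.g. one with the answer hard-coded) -- this is what the induction shows.
\forall n \;\exists T \;:\; T(n) = BB(n)
% False: no SINGLE machine works for every n -- but this is what
% "the sequence is computable" would require.
\exists T \;\forall n \;:\; T(n) = BB(n)
```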
mstoehr
|
14 years ago
|
on: Youth unemployment: The outsiders
Employers are probably unconvinced that younger applicants will be able to make them more profits. This is caused in part by the uselessness (for businesses) of the work students do in high school and college--this is a group of people whose life has primarily consisted of consuming profits. Employers are also probably more risk-averse because the recession means that profit margins are tighter (if they exist at all).
mstoehr
|
15 years ago
|
on: Whatever Happened to Voice Recognition?
Most commenters are focusing on relatively high-level features of decoding speech. It is important to also be aware that there is still great debate about what the acoustic correlates of linguistic events in speech are. It seems that our words are composed of subunits (usually taken to be phones out of the IPA--but there's work on alternatives), but exactly which acoustics correspond to the phones is still unsettled: lots of debate and mediocre recognition performance.
Undoubtedly there is much room for improvement on these higher-level features, but computers are still well behind humans in large-vocabulary isolated keyword spotting: this is a task where one word from a very large corpus of words is spoken and the human or computer has to guess what that word was. Computers do poorly relative to humans (particularly in noise), which suggests that many of the mistakes that computers make come from not being able to interpret the acoustics correctly.
mstoehr
|
16 years ago
|
on: Rest in Peas: The Unrecognized Death of Speech Recognition
I agree that it's at the heart of it (and I'm presently writing a paper where I'm using articulatory-phonetic features rather than phonemes). Unfortunately, there is no large-vocabulary speech recognizer that uses articulatory phonetics (yet!). Every large-scale speech recognizer, and most small-scale ones, use phonemes and are trained on speech that has been transcribed into phonemes. There is almost no data that is annotated with articulatory phonetics (a problem I'm working on right now).
mstoehr
|
16 years ago
|
on: Rest in Peas: The Unrecognized Death of Speech Recognition
It's not actually clear what we hear at this point: there is evidence that we respond to something like frequencies, pitch, volume, and other features. But the jury is still out on the low-level signal processing that is occurring in the cochlea and the primary auditory cortex. What's happening further downstream in the brain is even less clear.
mstoehr
|
16 years ago
|
on: Rest in Peas: The Unrecognized Death of Speech Recognition
It is true that humans use situational context. In cases where the semantics is important and complex for understanding an utterance, a computer will fail even worse, because it gets neither the semantics nor the speech signal.
On the topic of dialog, this is arguably the area where speech recognition has gained the most over the last nine years. Prior to 2001 there were not many usable dialog systems, whereas now (depending on your definition of "usable") there are many dialog systems deployed in call centers around the world.
Most call center dialog systems have a rudimentary fallback that asks people to repeat things when the system doesn't understand--although if it asks more than once, the callers tend to get very angry.
mstoehr
|
16 years ago
|
on: Rest in Peas: The Unrecognized Death of Speech Recognition
Actually, most research effort in speech goes into the language side rather than the signal processing of the speech signal, so I think many people share your intuition.
Bear in mind, though, that humans significantly outperform machines in tasks where isolated syllables or streams of nonsense syllables are spoken: e.g. "badagaka" is said, and humans can pick out the syllables whereas computers can have a lot of difficulty (in noise in particular).
Computers come closest to human performance when there is a lot of linguistic context to an utterance. So it appears that humans are doing something other than just using semantics.
mstoehr
|
16 years ago
|
on: Rest in Peas: The Unrecognized Death of Speech Recognition
He didn't make any advance that has made its way into a full word recognizer; he's merely recognizing phonemes (which are linguistic subunits of words), and several researchers in the field have criticized his methods. Additionally, none of the top five phoneme recognizers has ever been deployed as a word recognizer, and there is little chance that any of them will be in the next few years.
mstoehr
|
16 years ago
|
on: Neural Networks - A Systematic Introduction
I'm not any sort of expert on the neural network literature, but these are some papers from the last three years that caught my eye. Yann LeCun also does work on neural nets, but I haven't been all that impressed by his results. One of the main advances has been ways of developing 'deep' architectures with multiple layers rather than the traditional shallow neural networks (the SVM, for instance, is arguably a very cleverly trained single-layer neural network).
Here's a Geoffrey Hinton paper on training deep belief networks:
http://www.cs.toronto.edu/~hinton/absps/ncfast.pdf
Here's some stuff from Andrew Ng's group:
This paper shows how his deep belief network was able to 'learn', in an unsupervised manner, certain plausible image primitives:
http://robotics.stanford.edu/~ang/papers/nips07-sparsedeepbe...
This one won a best paper award (application paper), and it's about fast ways of building a deep belief network:
http://robotics.stanford.edu/~ang/papers/icml09-Convolutiona...
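Concretely, the deep-vs-shallow distinction is just about stacking more hidden layers. Here's a minimal numpy sketch (layer sizes and names are my own, not from the papers; the hard part the papers address--how to train the deep one--is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Random weights and zero biases for one fully connected layer."""
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

def forward(x, layers):
    """Forward pass: affine map followed by a tanh nonlinearity per layer."""
    for W, b in layers:
        x = np.tanh(x @ W + b)
    return x

# "Shallow": one hidden layer -- roughly the setting the SVM analogy refers to.
shallow = [layer(10, 50), layer(50, 2)]

# "Deep": several stacked hidden layers -- the kind of architecture that
# greedy layer-wise pretraining (Hinton's deep belief nets) made trainable.
deep = [layer(10, 50), layer(50, 50), layer(50, 50), layer(50, 2)]

x = rng.standard_normal((4, 10))   # a batch of 4 ten-dimensional inputs
print(forward(x, shallow).shape)   # (4, 2)
print(forward(x, deep).shape)      # (4, 2)
```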
mstoehr
|
17 years ago
|
on: Google is just an amoral menace
How do you think that the character traits of the writer are relevant to his case against Google?
mstoehr
|
17 years ago
|
on: The Banker Who Said No
And even the fanciest statistical methods are generally about showing that some mild generalization of linear regression is sufficient for a particular problem. Non-linear models often bring a great increase in model complexity, which leads to problems with overfitting, computational intractability, and instability.
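A toy numpy illustration of the overfitting point (my own example, not from the article): on the same ten noisy samples of a linear trend, a flexible polynomial drives the training error toward zero by fitting the noise, which is exactly what makes it unreliable on fresh data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ten noisy samples of an underlying linear relationship.
x = np.linspace(0, 1, 10)
y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=x.size)

def train_error(degree):
    """RMS residual of a least-squares polynomial fit on the training points."""
    coeffs = np.polyfit(x, y, degree)
    return np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))

print(train_error(1))  # modest error: the linear model leaves the noise alone
print(train_error(7))  # much smaller: the degree-7 fit is absorbing the noise
```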
mstoehr
|
17 years ago
|
on: Ask HN: Good books on machine learning?
There really isn't very much available on the practical side, so if you are looking to implement algorithms I suggest you make use of the machine learning course materials at ocw.mit.edu
Alternatively, if you want a good dose of a theoretical explanation of algorithms currently in use I highly recommend "Pattern Recognition and Machine Learning" by Christopher Bishop. It is definitely the best machine learning (and statistics) textbook that I have ever come across.
mstoehr
|
17 years ago
|
on: A critique of The Black Swan
The orthodox school of thought in financial economics was the efficient market hypothesis, which more or less states that the best valuation of current assets is the market price. I suppose there is something to be said for pointing out that the "best valuation" may not be very good. Taleb certainly wasn't the first to bring this up, but we probably aren't worse off for being reminded of it.
mstoehr
|
17 years ago
|
on: Moving to Argentina
If you're looking to enjoy the finer things in life, Chile is probably not the best place. Argentina, Peru, or Brazil would all probably be better picks.
mstoehr
|
17 years ago
|
on: Does Gödel Matter?
"This was really Godel's point: mathematics is not identical with formalism. They stand and fall separately. This is not to say that mathematics could never collapse for any reason, only that it would take a lot more than finding a paradox at the center of ZFC to make it happen."
That's significant because another great mathematician, David Hilbert, challenged mathematicians to come up with a complete and consistent set of axioms for all of mathematics. This was a great hope at one time, and Godel shattered it. Most people don't really talk about this old program anymore (except as a historical curiosity) because there is utterly no hope left in it. The other implications that people try to draw from Godel's work are probably a consequence of the fact that his work sounds like it says much more than it actually does once it's translated into plain English (and out of math-speak).
mstoehr
|
17 years ago
|
on: Does Gödel Matter?
Although it does lead to a mathematically uninteresting paradox: if you let A be the axioms of set theory and add an axiom P which states that A proves both x and not-x (i.e. that set theory is inconsistent), then A' = A plus P is still consistent.
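In symbols (my gloss, via Godel's second incompleteness theorem):

```latex
% Goedel II: a consistent A cannot prove its own consistency,
% so appending the denial of Con(A) cannot introduce a contradiction.
A \text{ consistent} \;\Longrightarrow\; A \nvdash \mathrm{Con}(A)
\;\Longrightarrow\; A' = A \cup \{\lnot \mathrm{Con}(A)\} \text{ is consistent}
```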
mstoehr
|
17 years ago
|
on: Common Lisp + Machine Learning Internship at Google (Mountain View, CA)
I was originally an economics major, but I've managed to switch out into a major in math and a minor in CS (the study of AI was just too seductive). One area of CS theory that is pretty straightforward to pick up for an economist is algorithmic game theory: (
http://www.amazon.com/Algorithmic-Game-Theory-Noam-Nisan/dp/...). There are also some strong connections to reinforcement learning and multi-agent systems (indeed much new research in computational economics is focused on multi-agent systems).
mstoehr
|
17 years ago
|
on: After Credentials
I've definitely seen evidence that the school one goes to correlates with success, but are you sure that the causality flows in the direction: academic credential --> success?
It seems plausible, at least, that if the parents are wealthy, seeing to it that their kid is well educated is a good way of signaling that the kid is decent material; it's a costly signal, so only the wealthiest parents will be able to afford it. And since the kids of wealthier parents tend to do better, the education signal, by some accident, ends up being used a great deal to gauge future success.
I actually think that it used to be a straightforward signal of wealth, but these days we don't want to admit that that's the case; instead we want to justify it as producing some useful economic product.
mstoehr
|
17 years ago
|
on: Google interview questions - fun brain teasers
The solution uses something called common knowledge, which is that everyone knows that everyone knows that everyone knows, ad infinitum. Before the queen makes the announcement they may all know that the other husbands have cheated, but they don't know whether everybody else knows.
To see the difference, consider the case where there are only two husbands (H1, H2) and two wives (W1, W2), and every man has cheated on his wife. When the queen announces that a man has been unfaithful, consider the situation from W1's perspective: she knows that H2 has cheated on W2, she doesn't know that H1 has cheated on her, and she can't talk to W2 about it. After the announcement, from her perspective, there are two possibilities: (both H1 and H2 are cheaters) or (just H2 is a cheater).
Now, to continue we need to think about what W1 will think about these two possibilities given her incomplete information:
If it were the world where (just H2 is a cheater) and H1 is honest, then W2, knowing that H1 hasn't cheated, would conclude that (just H2 is a cheater), since at least one husband has cheated. Thus, by the laws of the island, W2 would kill H2 on the day of the announcement.
Since W1 considers it a possibility that her husband, H1, was faithful, she won't do anything the first day; instead she'll just wait to see whether W2 kills H2.
By symmetry, W2 will go through the same line of reasoning about W1. Thus, both will do nothing the first day. Then, on the second day, W1 will realize that (just H2 is a cheater) must be false, since W2 didn't kill H2. So she'll go ahead and kill H1. (Apply the same reasoning symmetrically to W2.) Therefore, they'll both kill their husbands on the second day.
The rest of the details are pretty easy to establish. The point is that 99 days later all the men are killed.
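A tiny Python sketch of the induction (my own framing, not from the puzzle statement): a wife who can see k cheating husbands can only conclude that her own husband cheated once k silent days have gone by.

```python
def execution_day(num_cheaters):
    """Day on which the cheating husbands are killed (announcement = day 1).

    Every cheated-on wife sees all the cheaters except her own husband.
    A wife who sees k cheaters acts on day k + 1: only then has she ruled
    out the world in which her husband is innocent.
    """
    day = 0
    while True:
        day += 1
        sees = num_cheaters - 1   # cheaters visible to each cheated-on wife
        if day == sees + 1:       # enough silent days have passed
            return day            # every such wife reaches this at once

print(execution_day(2))    # 2, matching the two-couple walkthrough above
print(execution_day(100))  # 100: i.e. 99 days after the announcement
```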