Basically nobody was using automated gradient descent or similar algorithms because of their proclivity to get stuck on a boundary, and the boundaries are not well defined. One example is a catastrophic instability: if it gets triggered, it has the potential to damage the machine, but the exact parameters at which the instability occurs are not well known. This algorithm mixes the best of both worlds: the human can guide the search away from the areas where we think instabilities are, and the machine can do its optimization thing. It's pretty simple overall but enables a big shift in how experiments are run.

Edit to add: these instabilities often look just like better performance on a shot-to-shot basis, which makes the algorithms especially tricky. With a human in the loop we could say "this parameter change is just feeding the instability" vs. "oh, this is interesting, go here".
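The scheme described above can be sketched in a few lines. This is a hypothetical toy, not the actual experiment's code: `measured_performance`, `human_approves`, and `guided_hill_climb` are all invented names, and the "instability" is modeled as a region where noisy readings look deceptively good, exactly the failure mode mentioned in the edit.

```python
import random

def measured_performance(x):
    """Hypothetical shot-to-shot measurement: a smooth objective (peak at
    x = 2.0) plus an instability past x > 2.5 that *looks* like a gain."""
    base = -(x - 2.0) ** 2 + 4.0
    if x > 2.5:          # instability region, unknown to the optimizer
        base += 3.0      # deceptively high readings while feeding the instability
    return base + random.gauss(0, 0.05)

def human_approves(proposal):
    """Stand-in for operator judgement: veto steps into the region
    where instabilities are believed to live."""
    return proposal <= 2.5

def guided_hill_climb(x, steps=50, delta=0.1, rate=0.1):
    """Finite-difference hill climb; every proposed step must pass the
    human veto before it is taken, so the machine optimizes freely only
    inside the human-approved region."""
    for _ in range(steps):
        grad = (measured_performance(x + delta)
                - measured_performance(x - delta)) / (2 * delta)
        proposal = x + rate * grad
        if human_approves(proposal):
            x = proposal
    return x
```

Note how the instability interacts with the optimizer: near the boundary, the gradient estimate sees the inflated readings and pulls hard toward the unstable region, and only the veto keeps the search from following it. That is the "best of both worlds" split in miniature.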
noobermin|8 years ago
The only thing I can see leading to your conclusion is that a human can judge based on a host of criteria, not just a pre-defined set; maybe that's what you meant. Of course, intuitively, changing your criteria midstream would bias your judgement, I'd think, but that may be the real innovation here: it's something that is hard to do without a human judge in the mix.
erikpukinskis|8 years ago
Why? Humans have a much richer modeling apparatus than any computer does right now. We can draw on a very large, and yet almost fully tuned-to-reality, set of possible models simultaneously. You could estimate the number of available models as something combinatorial in the number of neurons you have. We also have machinery for searching that entire model space simultaneously, testing against a continuous stream of megabytes of data in real time, in order to find good fits.
Existing AIs wouldn't even know where to start. They can apply infinite models, but they have no grounding in reality and no way to choose amongst them. The AI doesn't even have an intrinsic sense of space, seeing as how it lacks a body. It's a very fast worker that can get things done when you give it very specific instructions, but it has no real ability to understand what it is doing or why it would want to do something different.