top | item 10590382


mbq | 10 years ago

We do know what goes on in them. They are just trying (more or less, though it is only a matter of speed) random solutions until one is good enough, where the human operator decides what counts as "good". This is fundamental: when you have a function f you know nothing about, the _only_ thing you can do to optimise it is to sample randomly, keep the current best solution, and hope it is good enough. Anything smarter would require some knowledge or assumptions about f, and so cannot be applied.
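The "sample randomly, keep the current best, stop when the human says it's good" loop above can be sketched in a few lines. This is a minimal illustration, not anyone's actual implementation; the names (`random_search`, `good_enough`, the example objective) are hypothetical, and the stopping threshold stands in for the human operator's judgement:

```python
import random

def random_search(f, sample, good_enough, max_evals=10_000):
    """Pure random search over a black-box objective f (to be minimised).

    sample      -- draws one random candidate solution
    good_enough -- predicate on the best value so far; this is where
                   the human operator's notion of "good" lives
    """
    best_x, best_val = None, float("inf")
    for _ in range(max_evals):
        x = sample()
        val = f(x)
        if val < best_val:          # keep the current best, nothing smarter
            best_x, best_val = x, val
        if good_enough(best_val):   # stop once the operator is satisfied
            break
    return best_x, best_val

# Example: minimise an opaque 1-D function over [-10, 10].
random.seed(0)  # fixed seed for reproducibility
f = lambda x: (x - 3.0) ** 2 + 1.0
best_x, best_val = random_search(
    f,
    sample=lambda: random.uniform(-10, 10),
    good_enough=lambda v: v < 1.01,  # "good" is a human choice
)
```

Note that the loop makes no use of f's structure at all; any method that did (gradients, convexity, smoothness) would be importing exactly the kind of assumption the comment says we don't have.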

In an even more meta direction, the question is whether human intelligence is some mystical emergent magic, or just a massive try-till-good-enough optimisation of physiological needs, plus a bonus for social behaviour sponsored by evolution, plus some random noise, all hidden behind a self-illusion of being a real thing, much like consciousness. This idea is obviously somewhat disturbing: it implies that success is only a matter of luck, that resourcefulness depends on environment, that motives are never really noble, that apes are less successful than us only because they can't (yet?) efficiently store and share information, and that art is an accidental conflux of random biases. On the other hand, it suggests that the singularity is nonsense, and more, that AGIs will end up self-crippled by flaws similar to those we observe in ourselves.
