top | item 28714561

bhntr3 | 4 years ago

This isn't an anti-AGI argument and it doesn't disprove humans. Humans have the same problem. It's harder to write a program to do a thing than it is to just do the thing.

It's appealing to think we can just make a program that learns programs, and then use that to learn to do anything computable. But this is a well-studied field, and it turns out that when you generalize a learning problem that way, you make it a lot harder.

The space of programs that could possibly identify dogs in images is much, much, much larger than the space of images that contain dogs. The images are bounded by the number of pixels in the image times the color depth. What is the space of programs bounded by? 10 TB? That's roughly 256^(10^13) programs. That's just a stupidly large number.
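A rough back-of-envelope check on that scale comparison (the image resolution and program-size budget here are illustrative assumptions, not anything precise):

```python
from math import log10

# Hypothetical sizes, just to illustrate the scale gap:
# a 256x256 image with 24-bit color vs. a 10 TB program budget.
image_bytes = 3 * 256 * 256        # bytes needed to encode one image
program_bytes = 10 * 10**12        # 10 TB, in bytes

# log10 of the number of distinct byte strings of each size
log10_images = image_bytes * log10(256)
log10_programs = program_bytes * log10(256)

print(f"~10^{log10_images:.0f} possible images")
print(f"~10^{log10_programs:.0f} possible 10 TB byte strings")
```

Both numbers are absurdly large, but the program space's exponent is about eight orders of magnitude bigger than the image space's exponent.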

Obviously not every 10TB string is a valid program. You can reduce that number. But what current research in program synthesis tells us is that you can't reduce it as much as you might hope.
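The "not every string is a valid program" point can be made concrete with a toy language (this grammar and validity check are my own illustration, not anything from program synthesis research): valid programs are a minority of all strings, but their count still grows exponentially with length.

```python
from itertools import product

# Toy expression language: E -> x | (E) | E+E, over a 4-symbol alphabet.
ALPHABET = "()x+"

def is_valid(s):
    """Check whether s is a well-formed expression in the toy grammar:
    parens balance, and adjacency rules of the grammar are respected."""
    depth = 0
    prev = None
    for c in s:
        if c in "x(":              # atoms/open-parens can't follow x or )
            if prev in ("x", ")"):
                return False
            if c == "(":
                depth += 1
        elif c == ")":             # ) can't follow start, (, or +
            depth -= 1
            if depth < 0 or prev in (None, "(", "+"):
                return False
        else:                      # + can't follow start, (, or +
            if prev in (None, "(", "+"):
                return False
        prev = c
    return depth == 0 and prev in ("x", ")")

for n in range(1, 9):
    total = len(ALPHABET) ** n
    valid = sum(is_valid("".join(p)) for p in product(ALPHABET, repeat=n))
    print(f"length {n}: {valid} valid out of {total} strings")
```

Filtering out malformed strings prunes the space, but the count of valid programs still explodes with length, which is the synthesis problem in miniature.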

So the point is that, just like for humans, it's easier to learn to do a thing than it is to learn to write a program to do a thing.

Veedrac | 4 years ago

> This isn't an anti-AGI argument and it doesn't disprove humans.

You're saying you can't find intelligent programs this way because the search space is large. That's an anti-AGI argument, and it's fallacious because humans evolved.

Yes, you can only search an infinitesimal subset of the search space. The same is true for DNA. The argument is clearly invalid without at least some reference to properties that gradient descent has, or that evolution has but gradient descent does not — and you haven't given any. It is wrong for the same reasons the watchmaker analogy is.
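The watchmaker point can be made concrete with a toy hill-climbing search in the spirit of Dawkins' "weasel" program (the target string, mutation rate, and population size here are arbitrary choices of mine): blind random sampling of 27^23 strings is hopeless, but cumulative selection finds the target quickly.

```python
import random

random.seed(0)
TARGET = "methinks it is a weasel"   # 23 chars over a 27-symbol alphabet
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(s):
    """Number of characters matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

# Start from a random string; each generation, mutate 100 copies of the
# best candidate and keep the best (elitism ensures no regression).
best = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while best != TARGET and generations < 20000:
    generations += 1
    children = [
        "".join(c if random.random() > 0.05 else random.choice(ALPHABET)
                for c in best)
        for _ in range(100)
    ]
    best = max(children + [best], key=score)

print(f"found target in {generations} generations; "
      f"search space is 27**{len(TARGET)} strings")
```

A few hundred generations of a few hundred evaluations suffice, despite a search space with dozens of orders of magnitude more strings — which is why "the space is too big" doesn't by itself rule out a guided search.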