
bb86754 | 1 year ago

Except that's not what I'm saying, and it does in fact matter what's under the hood if you're looking for a scientific, causal explanation of organic intelligence. I know that AIs are useful, and that they can be logically sound and make real-world contributions. That's not what the article is arguing against. Human reasoning, by the way, is far more complicated than any of these things.

The article states that AI will never reach human intelligence, which LeCun defines as "reasoning, planning, persistent memory, and understanding the physical world."

I would argue that's still an extremely narrow definition of human intelligence. Even ignoring semantics, current AIs cannot do any of those things, and by my lights never will, for the same reasons LeCun gives.


barfbagginus|1 year ago

Thank you for your response!

It seems that you express two critical needs which I don't share:

1. You need human-analogous AI intelligence to provide a causal explanation for human intelligence.

But it doesn't have to provide this to be human analogous. It just has to perform functions a human can.

2. You need AI intelligence to never have memory, planning, persistence, and physical understanding.

But it demonstrably has all of these to various degrees already. We just need simple bolt-on modules like RAG (persistence, understanding), action/critique loops, and tool use (reasoning, planning, understanding). And there are clear paths for increasing the functionality in each of these dimensions.

Functionally, AI is evolving, and there are no clear blockers against this process.
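To make the "bolt-on" idea concrete, here is a minimal sketch of the action/critique loop pattern mentioned above. The `actor` and `critic` functions are hypothetical stand-ins for model calls (in practice, LLM API requests), stubbed here so the control flow itself is runnable:

```python
def actor(task, feedback):
    # Stand-in for a model call that drafts an answer,
    # optionally incorporating the critic's feedback.
    draft = f"answer({task})"
    if feedback:
        draft += " [revised]"
    return draft

def critic(draft):
    # Stand-in for a model call that judges the draft.
    # This toy critic accepts only revised drafts, forcing
    # at least one refinement pass through the loop.
    if "[revised]" in draft:
        return "ok", None
    return "retry", "add detail"

def act_critique_loop(task, max_rounds=3):
    # The actual pattern: draft, critique, revise, until the
    # critic accepts or the round budget runs out.
    feedback = None
    for _ in range(max_rounds):
        draft = actor(task, feedback)
        verdict, feedback = critic(draft)
        if verdict == "ok":
            return draft
    return draft

print(act_critique_loop("2+2"))  # → answer(2+2) [revised]
```

The point is only that "planning" here is an outer control loop around the model, not a new capability inside it, which is why it bolts on so readily.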

It seems that at some point you have to say that functionalism is not enough. There must be a soul that AI will still be missing, even if functional equivalence is there.

If the AI achieves functional abilities similar to humans' - which, let's grant, seems possible for every function we can identify - then you will have to retreat to claiming there is some "je ne sais quoi" that is not captured.

In other words, you will have to argue that the human soul is real.

Is that a length you're ready to go to? Is your position that science can't explain the human soul, even if it can simulate all human functions?

Or are there, in your view, functional limits that, if we reach them, you will admit "this is enough. I was wrong"?

That's my first question to you.

I would also like to point out that LeCun thinks AI can eventually be human analogous. Specifically, LeCun argues that his own JEPA model can achieve these things, because it has a constantly learning world model, a planning/critique model, a memory model, and an actor model. He criticizes transformer-based LLMs mainly because simple transformers can't learn in an ongoing way.

Are you comfortable admitting that LeCun is promoting his own work, and believes it can reach human intelligence levels? If not, what specifically makes you think LeCun is on your side here?

That is my second question to you.