top | item 40389006

Colorless green DNNs sleep furiously in an unexplainable fantasy

50 points | arthurtakeda | 1 year ago | cacm.acm.org

39 comments


roywiggins|1 year ago

> To conclude, we can never automate the construction (nor the testing) of computer programs. So says logic and so says the theory of computation.

On the other hand, the strong Church-Turing thesis is that nothing is more powerful than a Turing machine when it comes to solving halting problems, including humans. This is not proven or really provable but it seems more likely than not (humans certainly don't seem especially good at solving halting problems in general).

If the Church-Turing thesis is true (at least as applied to humans), discovering valid, halting programs by heuristic using a machine isn't barred by the halting problem as far as I can tell.

In other words, humans almost certainly aren't solving halting problems when they program, why should computers necessarily have to? Humans mess up and produce programs that don't halt or don't produce the output they expect all the time, but we still produce useful programs at least sometimes, even if we can't know for sure which programs are which.
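A minimal sketch of this idea, assuming hypothetical toy "candidate programs": a machine can discover a halting, correct program by generate-and-test under a step budget, without ever deciding halting in general.

```python
# Sketch: heuristic program discovery under a step budget. The candidate
# "programs" (doubler, looper) are hypothetical toys, not a real search space.

def run_with_budget(prog, x, budget=1000):
    """Run a generator-based program for at most `budget` steps.
    Returns (halted, result)."""
    gen = prog(x)
    result = None
    for _ in range(budget):
        try:
            result = next(gen)
        except StopIteration:
            return True, result
    return False, None  # budget exhausted: give up on this candidate

def doubler(x):   # halts, and happens to meet our spec
    yield x * 2

def looper(x):    # never halts
    while True:
        yield x

def find_program(candidates, tests, budget=1000):
    """Keep the first candidate that halts within budget and passes all tests."""
    for prog in candidates:
        if all(run_with_budget(prog, x, budget) == (True, y) for x, y in tests):
            return prog
    return None

tests = [(1, 2), (3, 6)]
best = find_program([looper, doubler], tests)  # selects doubler
```

The search never answers "does this program halt?" in general; it only answers "did it halt within my budget?", which is all a heuristic discoverer (human or machine) needs.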

avsteele|1 year ago

Exactly.

I'm very surprised someone could write a whole essay like this and have this misconception.

I was half expecting to see "humans building machines smarter than humans violates the laws of thermodynamics" next.

zer00eyz|1 year ago

>> but any talk of fully automating human intuition (Turing’s Oracle!) in coming up with novel solutions to complex problems is fantasy and the computer science police department (CSPD) should issue a citation for any such talk.

You and I both intuit that there are conjectures that, likely, do not have a solution. The author is proposing (and maybe rightly) that an AI as a Turing machine with this intuition would inherently (have to) be a Turing Oracle...

It's an interesting take that has some implications...

viccis|1 year ago

I got a flashback to Zed Shaw's infamous Turing-completeness quip about Python 3, except that at least his was really just a stupid joke. Here, the author is using a motte-and-bailey fallacy: instead of engaging with the complex, open-ended claim ("AI Will Soon Replace Programmers"), he retreats to attacking a more straightforward and falsifiable one (the ability to write any computer program).

mmoskal|1 year ago

I guess the short way to say it is that "undecidable" doesn't mean "it can never be decided", just that it can't always be decided.

And of course all programs of practical significance are finite state machines (since there is only a finite number of atoms in the universe).
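And for a genuinely finite-state program, halting is in fact decidable: simulate it and stop as soon as a state repeats. A small sketch, assuming a hypothetical encoding where each state maps to its successor:

```python
# Halting is decidable for a deterministic finite-state machine: a run either
# reaches HALT or revisits a state (and then loops forever). `transitions` is
# a hypothetical dict encoding: state -> successor state.

HALT = object()

def halts(transitions, start):
    """Decide halting for a deterministic finite-state machine."""
    seen = set()
    state = start
    while state is not HALT:
        if state in seen:   # revisited a state: the run cycles forever
            return False
        seen.add(state)
        state = transitions[state]
    return True

countdown = {3: 2, 2: 1, 1: HALT}   # 3 -> 2 -> 1 -> halt
spinner   = {"a": "b", "b": "a"}    # a <-> b forever
```

The catch is only practicality: real programs have astronomically many states, so this decider is infeasible, not impossible.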

mlyle|1 year ago

Came here to write this, was glad to see it's already here.

All the limits that exist for computing machines almost certainly exist for us too; we don't need to solve the halting problem; we don't solve large NP-hard problems; etc.

Of course, if one believes there's something mystical about brains -- whether a soul or access to quantum computing -- perhaps one would disagree.

Kinrany|1 year ago

There's only one halting problem.

crockeo|1 year ago

The section "Forget the Theory of Computation, AI will Soon Replace Programmers" feels as if it's missing a practical slant that explains this trend. I think approaching it from a purely theoretical perspective misses the relationship between real software and real problems.

First: most practical problems are computable, and most practical programs do not run forever. Even if an agent had to run a program to termination, it could still use the feedback gathered from running it to make modifications.

Second: an agent with a sufficiently good intermediate representation _which is computable_ doesn't need to actually execute a program to model its behavior. Humans do this all the time--we can read code and understand its approximate function as a means to edit it. I don't want to claim that LLMs have a concept of "understanding," but they definitely build an intermediate representation which, when combined with external input (e.g. having to kill a program, because it exceeded a timeout), can be used to modify the program.
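A toy illustration of that second point, predicting behavior without executing the concrete program: an abstract interpreter over the sign domain. The domain and rules here are the textbook example, chosen for brevity.

```python
# Abstract interpretation sketch: determine the sign of x * y from the signs
# of x and y alone, never running the concrete multiplication on real inputs.

NEG, ZERO, POS = "-", "0", "+"

def sign(n):
    """Concrete value -> its abstract sign."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abstract_mul(a, b):
    """Multiply in the abstract sign domain."""
    if ZERO in (a, b):
        return ZERO
    return POS if a == b else NEG

# Soundness check: the abstract answer agrees with the concrete result's sign.
for x in (-7, 0, 5):
    for y in (-2, 0, 9):
        assert abstract_mul(sign(x), sign(y)) == sign(x * y)
```

An "intermediate representation" in this spirit lets an agent reason about a program's behavior statically, sidestepping execution entirely for the properties the abstraction captures.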

Now with all of that said: I don't feel confident about whether or not AI is actually a serious risk to programmers, I just don't feel as if this argument is sufficiently compelling.

robomc|1 year ago

It's a ridiculous argument that seems to imply that humans can't code either.

xvilka|1 year ago

People in the comments have started to attack the author's argument about the halting problem, which I agree isn't the best one, but what about the other arguments? They are no less damning for the current technology.

aSanchezStern|1 year ago

The claim that "there can be no explainability of such models" is also completely unsupported. We know that some simple networks can be explained, and explanation is a human notion, not a formal one, so we can't know how much explainability is possible. There's active research in explainability of neural networks and progress is being made. I don't know how far it'll go, but it hasn't hit any sort of theoretical boundary yet. I think the author is making a similar sort of mistake that they do in invoking the halting problem here, confusing a "not forall" for a "forall not". There are certainly neural network models that can't be explained ("not every model can be explained" is true), but that doesn't mean that there aren't ones that can ("every model can not be explained" is not true).

As part of the argument on explainability, the author says "we have known for some time these models cannot represent or model symbolic structures". There might be some particular weird way of defining things where this is true, but it's certainly not true in the most basic reading. In fact, we have very strong theoretical results that a sufficiently wide neural network can compute any function, including those that are "symbolic". Whether they can be trained using gradient optimization to compute every function is an open problem, but the idea that there are functions they can't represent is provably false.

The third point is that LLMs aren't a step towards AGI. They again make the claim that there are functions a DNN can't compute (or approximate arbitrarily well); see https://en.wikipedia.org/wiki/Universal_approximation_theore... for the theorem showing this isn't true.

The rest of this point, and the fourth point, are basically just about how current LLMs are actually pretty dumb in a lot of situations. This is the only argument that's actually compelling to me; there's a lot of hype around LLMs and their abilities, but a lot of that may come from our brains being happy to paper over flaws and anthropomorphize things, and from the fact that we haven't yet learned what to look for in artificial output, the way we spent decades doing for other machine outputs before LLMs. Recall that when dumb pattern-matching conversation bots were invented in the 60s (https://en.wikipedia.org/wiki/ELIZA), people thought they couldn't possibly be artificial and must really be human, even though they are obviously artificial by today's standards.
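For a sense of how shallow those 1960s bots were, here is an ELIZA-style responder reduced to its essence. The rules are illustrative stand-ins, not Weizenbaum's originals.

```python
import re

# ELIZA in miniature: pure pattern matching with canned templates,
# no model of meaning at all. Rules here are invented for illustration.

RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.match(text)
        if m:
            return template.format(m.group(1))
    return "Tell me more."

# respond("I am tired of debugging") -> "Why do you say you are tired of debugging?"
```

That such a mechanism fooled people is a reminder that "it sounds human" has always been a weak test of machine intelligence.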

So, I agree that we don't know if LLMs are the first step towards AGI, and they probably aren't, in any sense beyond the fact that inventions tend to build on each other. But we don't have enough information to say definitively that they aren't that first step.

alkonaut|1 year ago

How does AI programming relate to computability and halting? I can't determine whether a program will halt either, but I can kill it if it doesn't halt and just try another program, because I can operate a stopwatch. So where does this halting argument actually bite? We can already see LLMs solving simple programming tasks: creating software from an informal description.
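The "stopwatch" is a few lines of code. A sketch, where the infinite loop is just an illustrative stand-in for a bad candidate program:

```python
import subprocess
import sys

# Run a candidate program with a wall-clock budget; a timeout means
# "kill it and try another candidate", no halting oracle required.

def try_program(cmd, seconds=1.0):
    """Returns (halted, stdout). TimeoutExpired means the stopwatch fired."""
    try:
        out = subprocess.run(cmd, capture_output=True, text=True,
                             timeout=seconds)
        return True, out.stdout
    except subprocess.TimeoutExpired:
        return False, ""

# A deliberately non-halting candidate, killed after half a second.
halted, _ = try_program([sys.executable, "-c", "while True: pass"],
                        seconds=0.5)
# halted is False: the process was killed, and that outcome is itself
# usable feedback for whatever is searching over candidate programs.
```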

jpasmore|1 year ago

i stopped reading the article at this point...

Havoc|1 year ago

> Automatically discovering formal (i.e., unambiguous) solutions to novel problems can never be automated.

Things like AlphaFold would suggest to me that this doesn’t always hold true.

Perhaps it’s a matter of definition, but to me there does seem to be “some” problem-solving ability. And once you’ve got even a little of that, it’s a scaling question and not really compatible with “can never be automated.”

soist|1 year ago

Do you know about the manifold hypothesis and why that's the inductive bias in current neural networks?

hcarvalhoalves|1 year ago

I’m not sure the Halting Problem is a compelling theoretical reason why AI can’t program.

I do see a compelling argument about the limits of LLMs being probabilistic (rather than logic/symbolic), as I’ve seen GPT/Copilot come up with code that looks syntactically valid but calls completely made-up APIs.

Retr0id|1 year ago

> Let us suppose some AI came up with an algorithm A to solve a specific problem P. To ensure that A solves P, the AI has to execute A to test its functionality.

Not necessarily. It could (in theory) prove through formal methods that A solves P.

PartiallyTyped|1 year ago

Taking this a step further, if the AI could write Dafny, it could rely on invariants and pre- and post-conditions for this proof, no?
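The shape of such a contract, sketched in plain Python. Unlike Dafny's verifier, this only checks the postcondition exhaustively over a small bounded domain, so it is bounded testing rather than a proof for all inputs; the algorithm and spec here are illustrative.

```python
from itertools import product

# Contract-style check: an algorithm A (integer_max), a spec for problem P
# (postcondition), and an exhaustive check over a bounded input domain.

def integer_max(a, b):
    """The 'algorithm A' under scrutiny."""
    return a if a >= b else b

def postcondition(a, b, r):
    """The spec for 'problem P': r dominates both inputs and is one of them."""
    return r >= a and r >= b and r in (a, b)

def bounded_verify(domain):
    """Check the contract on every input pair drawn from `domain`."""
    return all(postcondition(a, b, integer_max(a, b))
               for a, b in product(domain, repeat=2))

verified = bounded_verify(range(-5, 6))  # True on this bounded domain
```

A real Dafny proof discharges the same postcondition symbolically for all inputs at once, which is exactly the "no need to execute A" route Retr0id describes.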

piannucci|1 year ago

I don’t see how the author’s arguments about impossibility results pertaining to “distributed sub-symbolic architectures” apply any more strongly to LLMs or DNNs than they do to human brains. Human programmers aren’t magically capable of solving the halting problem either, but we muddle through somehow.

EnigmaFlare|1 year ago

Yea. Most of what we do isn't that rigorous and it's fine. When we do need rigor, we use external tools, like classical computer programs or writing math down on paper. LLMs can use external tools too. We're also hopeless at explainability - people usually have no idea why they make most of the decisions they do. If we have to, we try to rationalize but it's not really correct because it doesn't reproduce all the intuition that went into it. Yet somehow we can still write software!

hoseja|1 year ago

>"AI is useless"

>Lists a bunch of totally human failings

Great argument.

qarl|1 year ago

OMG. Did he just use the halting problem to suggest that programmers won't be replaced by AI?

Dude, 99.999% of programming today is writing HTML.

jdkee|1 year ago

Did Gary Marcus write this?