
piannucci | 1 year ago

I don’t see how the author’s arguments about impossibility results pertaining to “distributed sub-symbolic architectures” apply any more strongly to LLMs or DNNs than they do to human brains. Human programmers aren’t magically capable of solving the halting problem either, but we muddle through somehow.
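For context, the impossibility result usually invoked here is the halting problem. A minimal sketch of the standard diagonalization argument, in Python (all names below are illustrative, and `halts` is the hypothetical decider that can't actually exist):

```python
def halts(program, arg):
    """Hypothetical perfect halting decider (no such function can be written)."""
    raise NotImplementedError

def paradox(program):
    # Loop forever exactly when the decider claims we would halt.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Feeding paradox to itself defeats any candidate halts():
# if halts(paradox, paradox) returned True, paradox would loop forever;
# if it returned False, paradox would halt. Either way the decider is wrong.
```

The point in the comment above is that this limit applies to any computational system, human programmers included, yet it doesn't stop anyone from writing useful software.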


EnigmaFlare | 1 year ago

Yeah. Most of what we do isn't that rigorous and it's fine. When we do need rigor, we use external tools, like classical computer programs or writing math down on paper. LLMs can use external tools too. We're also hopeless at explainability - people usually have no idea why they make most of the decisions they do. If pressed, we rationalize, but the rationalization isn't really faithful because it doesn't capture all the intuition that actually went into the decision. Yet somehow we can still write software!
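A minimal sketch of the "delegate the rigorous part to an external tool" idea, assuming a hypothetical `model_suggests_tool_call` stand-in for whatever model or API is in play; the tool itself does exact arithmetic instead of trusting the model's guess:

```python
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def exact_arithmetic(expr: str):
    """External tool: evaluates an arithmetic expression exactly."""
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def model_suggests_tool_call(prompt: str):
    # Hypothetical: a real model would decide here whether a tool is needed
    # and emit a structured call instead of answering from intuition.
    return {"tool": "exact_arithmetic", "input": "1234 * 5678"}

TOOLS = {"exact_arithmetic": exact_arithmetic}

call = model_suggests_tool_call("What is 1234 * 5678?")
print(TOOLS[call["tool"]](call["input"]))  # 7006652
```

The division of labor mirrors the comment: the fuzzy system decides *when* rigor is needed and hands that part off, rather than being rigorous itself.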