item 44406755

awb | 8 months ago

Interesting read and thanks for sharing.

Two observations:

1. Natural language appears to be the starting point of any endeavor.

2.

> It may be illuminating to try to imagine what would have happened if, right from the start our native tongue would have been the only vehicle for the input into and the output from our information processing equipment. My considered guess is that history would, in a sense, have repeated itself, and that computer science would consist mainly of the indeed black art how to bootstrap from there to a sufficiently well-defined formal system. We would need all the intellect in the world to get the interface narrow enough to be usable, and, in view of the history of mankind, it may not be overly pessimistic to guess that to do the job well enough would require again a few thousand years.

LLMs are trying to replicate all of the intellect in the world.

I’m curious if the author would consider that these lofty caveats may be more plausible today than they were when the text was written.

roxolotl | 8 months ago

One thing I’d add to all of the other comments is just to reflect on experience. Maybe I’ve mostly worked with people who are incompetent with natural language. But assuming I’ve mostly worked with average people, it’s astonishing how common miscommunication is among experts when discussing changes to software. I’ve always found the best way to avoid it is to drop into a more structured language. You see this with most communication tools: they add structure to avoid miscommunication.

bwfan123 | 8 months ago

> I’m curious if the author would consider that these lofty caveats may be more plausible today than they were when the text was written.

What is missed by many and highlighted in the article is the following: there is no way to be "precise" with natural languages. The "operational definition" of precision involves formalism. For example, I could describe to you in English how an algorithm works, and maybe you understand it. But for you to precisely run that algorithm requires some formal definition of a machine model and the steps involved to program it.

The machine model for English is undefined! And this could be considered a feature, not a bug: it allows a rich world of human meaning to be communicated, whereas formalism limits what can be done and communicated within that framework.
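A toy sketch of the contrast above (not from the comment itself): an English description of an algorithm leaves choices unspecified, while the formal version forces every one of them to be made explicit.

```python
# English: "To find a number in a sorted list, keep checking the middle
# element and discarding the half that can't contain it."
# That leaves open: inclusive or exclusive bounds? What if it's absent?
# The formal version pins down every such choice.

def binary_search(xs, target):
    """Return the index of target in sorted list xs, or -1 if absent."""
    lo, hi = 0, len(xs) - 1          # inclusive bounds: one forced choice
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                        # behavior when absent: another forced choice

print(binary_search([1, 3, 5, 7, 9], 7))   # → 3
print(binary_search([1, 3, 5, 7, 9], 4))   # → -1
```

Both humans might "understand" the English version, yet still disagree about what happens when the target is missing; the formal version admits no such disagreement.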

skydhash | 8 months ago

I forget where I read it, but the reason natural language works so well for communication is that its terms are labels for categories rather than identifiers. You can concatenate enough of them to refer to a singleton, but for the person in front of you, the result may still denote many items or an empty set. Some labels may not even exist in their context.

So when we want a deterministic process, we invent a set of labels where each one is a singleton, along with a set of rules that specify how to describe their transformations. Then we invent machines that can interpret those instructions. The main advantage is that we know the possible outputs (assuming good reliability) before we even have to act.
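The category-vs-singleton distinction can be sketched in a few lines (the dictionaries below are hypothetical illustrations, not a real lexicon):

```python
# Natural-language labels name categories: one word maps to a whole set
# of possible referents, and the listener's set may differ from yours.
natural = {
    "chair": {"office chair", "armchair", "rocking chair"},  # a category
    "bank":  {"river bank", "savings bank"},                 # ambiguous
}

# Formal labels are singletons: each identifier denotes exactly one value,
# and fixed rules (here, Python's semantics) govern its transformation.
balance_cents = 100_000                         # exactly one referent
balance_cents += balance_cents * 5 // 100       # apply 5% interest

# Because every label is a singleton, the output is knowable in advance.
print(balance_cents)   # → 105000
```

The interesting part is the asymmetry: the `natural` mapping can only be resolved by a listener's context, while `balance_cents` resolves the same way on every machine.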

LLMs don't work so well in that regard: while they have a perfect embedding of textual grammar rules, they don't have a good representation of what those labels refer to. All they have are relations between labels and how likely they are to be used together, not what sets those labels denote or how the items in those sets interact.

akavi | 8 months ago

But for most human endeavors, "operational precision" is a useful implementation detail, not a fundamental requirement.

We want software to be operationally precise because it allows us to build up towers of abstractions without needing to worry about leaks (even the leakiest software abstraction is far more watertight than any physical "abstraction").

But, at the level of the team or organization that's _building_ the software, there's no such operational precision. Individuals communicating with each other drop down to that precision when useful, but in any endeavor larger than 2-3 people, the _vast_ majority of communication occurs in purely natural language. And yet, this still generates useful software.

The phase change of LLMs is that they're computers that finally are "smart" enough to engage at this level. This is fundamentally different from the world Dijkstra was living in.

3abiton | 8 months ago

On that note, I wonder if having LLM agents communicate with each other in a human language rather than in latent space is a big limitation.