top | item 40207088


chiasson | 1 year ago

> Until now, that vision has been bottlenecked on turning fuzzy informal intent into formal, executable code

Software has been and always will be fundamentally a problem of communication. What many don't realize is that the true challenge of communication is not one of transmission, but clarity of thought and understanding. Any tool, software language, or AI coder will still be limited by the clarity and completeness of the specs presented. What is software after all but complete and precise specs? Nothing will ever turn your fuzzy intent into your clear best interest. If you don't have clarity of thought, no intelligence in the world can help you.

That's not to say I disagree with the author, I agree that more complex creative power will be accessible to a greater number of people, but I think the intrinsic efficiency limit of communication will always be more difficult than people think.


karma_pharmer|1 year ago

> If you don't have clarity of thought, no intelligence in the world can help you.

I disagree.

Moronic people in positions of power are often in this situation. So far this has worked out well for the moronic powerful people: as long as they can convince intelligent people to work for them, they get to be moronic, powerful, and successful. This strategy has been working for them (as a group) for several millennia.

I think this is why the "LLMs will evolve into God In My Pocket" story gets so much traction. To a moronic person in power this sounds a lot like everyday life, but with computers cleaning up their messes instead of humans. It is totally plausible to them.

maxlamb|1 year ago

Maybe it's getting into semantics, but clarity of thought does not necessarily mean an accurate understanding of reality (which is usually messy and complex), so moronic people could ironically have more "clarity of thought" than non-moronic people, because they tend to hold simple beliefs (these X people are good, but those Y people are bad).

Terr_|1 year ago

That sounds less like helping the person overcome their problem and more like secretly demoting them into a figurehead while the devious Grand Vizier actually steers things.

I don't want my tools to work that way. Especially since the not-so-assistant may be making decisions to favor its parent-corporation.

> Once you were in the hands of a Grand Vizier, you were dead. Grand Viziers were always scheming megalomaniacs. It was probably in the job description: "Are you a devious, plotting, unreliable madman? Ah, good, then you can be my most trusted minister."

-- Interesting Times by Terry Pratchett

JohnFen|1 year ago

Indeed. The issue is that of precision. I think it will never be possible to write nontrivial programs by describing them in anything like a conversational human language. I think this is true even if LLMs achieve some sort of godlike intelligence.

The problem is that human languages aren't intended to communicate that sort of precision. That's one of the largest reasons why very specialized technical professions (medicine, engineering, legal, etc.) heavily employ specialized jargon. It's necessary in order to make up for the deficiencies of natural human languages for these sorts of things.

In order to create a program, a very high degree of precision in the description is required. Programming languages can be viewed as a variety of such jargon. If you're doing it in (say) English, you'll always have to use it in a way that makes it no longer "conversational English".

flawsofar|1 year ago

I think LLMs are already influencing the development of something between natural language and a formal language.

Prompt engineering and getting structured data in and out of an LLM naturally lead you to something like precise English with JSON and pseudocode syntax sprinkled in.
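
To make that hybrid register concrete, here's a made-up sketch: a prompt mixing precise English, a JSON schema, and a line of pseudocode, plus the kind of machine-checkable reply it's meant to elicit. The field names, rules, and example text are all invented for illustration, not from any real API.

```python
import json

# A hypothetical prompt in the hybrid register described above: precise
# English, a JSON schema, and a line of pseudocode.
PROMPT = """
Extract every event from the text below.
Return ONLY a JSON array matching this schema:
  [{"name": str, "date": "YYYY-MM-DD or null", "attendees": int}]
Rule (pseudocode): for each event, if date is missing -> date = null
Text: "The launch party is on 2024-03-01 with 40 guests."
"""

# An example of the structured reply such a prompt is meant to elicit;
# in practice this string would come back from the model.
reply = '[{"name": "launch party", "date": "2024-03-01", "attendees": 40}]'

# Because half of the "language" is JSON, the reply is machine-checkable.
events = json.loads(reply)
print(events[0]["name"], events[0]["date"])  # launch party 2024-03-01
```

The English half stays fuzzy, but the JSON half gives both sides something a parser can verify, which is roughly the in-between language this comment is pointing at.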

var_cw|1 year ago

> Software has been and always will be fundamentally a problem of communication.

It's a problem of communication with a system. That system can be a human, a group of humans, some esoteric knowledge graph, the internet, or even a single webpage. I believe the author here is focusing on generative UIs from a software developer's standpoint. This is just one form of interaction/communication, and it would be wrong to generalize it across scenarios.

I have been thinking about highly efficient, lossless forms of communication between a human and a system. I keep converging on voice agents: if they feel a lot more human (something similar to the Hume demo), then that can be the default mode of how a human interacts with a system. Again, this will be a specific setting and sounds more consumer-ish, a clear contrast to what the author intends to write about.

sublinear|1 year ago

The same points you made make me wonder if end users even want to modify or write a program.

Even if it's possible, it's still a lot of work. I just don't see that as the typical consumer behavior.

philipswood|1 year ago

Software often doesn't do the thing you want to do.

It almost does it.

Or it can just-just do it with contortions and a lot of repetitive toil on your side.

I think writing little plugins and drivers to do the thing you want with an LLM is something that could be built into a lot of software.

I don't think LLMs can architect and build whole systems yet, but this niche is something that can be done.

surfingdino|1 year ago

I have yet to meet a non-dev who wants to write or modify software themselves. Quite a few people still don't know that it's software that makes appliances and devices do things, or that somebody writes it. They just don't know what software is, so how would they know where to begin writing or modifying code?

hayley-patton|1 year ago

The typical consumer isn't empowered to modify or write a program, so why would they entertain the idea?

abecedarius|1 year ago

Imagine if practically all the writing you encountered was in printed books and magazines. Talk about "personal writing", how it might change things, how to get there, would get back puzzled looks and "Even if it's possible, it's still a lot of work. I just don't see that as the typical consumer behavior." It is a lot of work to produce a book.

Some relevant background: Bonnie Nardi, A Small Matter of Programming.

jbeninger|1 year ago

I wish I could remember where I saw this quote - maybe a random comment on Slashdot back in its heyday? Paraphrasing:

"When we finally get a tool that will allow people to write code in plain English, it will be discovered that most people cannot speak plain English."

greenavocado|1 year ago

Agree. Even though I only have twelve years of experience writing software professionally, I have started to use LLMs more and more in the past few months to write my code for me, given a specification of the task. The top commercial LLMs usually do a good job but often fail catastrophically, especially on esoteric projects. I feel like I am slowly developing into a lawyer because of how thoroughly I have to spec out the problem at hand to give enough context to get exactly what I want.

dspillett|1 year ago

> because of how thoroughly I have to spec out the problem at hand to give enough context to get exactly what I want

This is why LLMs in their current iterations are not the danger to our jobs that some fear them to be: end users and other stakeholders lack the precision to properly spec what they want a tool/service to do (and often don't even accurately know what they want), and they don't have the time (or if they do, don't have the patience) to go back and forth iterating over the wording of the spec to get things adequately defined.

I'm currently an LLM refusenik⁰ so might be missing some context, but from my outside view I get the impression that they do a good job at simple boilerplate-y stuff but don't save time/effort/thinking on anything much more complex. I'm sure most devs beyond the beginner level are happy to have those boilerplate tasks taken off them so they can do the fun stuff, but equally, end users aren't going to spend the time working with the tools to get anything more interesting than very simple programming/automation tasks done.

--

[0] I'm on the “is that really morally right?” side of the fence on how the training materials are sourced, particularly with regard to code covered by licences like the GPL family¹, and I'm anal enough to not use something I have that sort of question about even if it makes my life a little harder.

[1] If the assurance is that chunks of code can't be regurgitated, and that this makes it all fine both morally and legally, why are none of the publicly usable LLMs, such as MS's Copilot, trained on, say, Microsoft's Office/Windows/other code as well as public repositories? Surely they should be as assured that it isn't a problem as much as they want everyone else to be assured it isn't a problem?

082349872349872|1 year ago

> I feel like I am slowly developing into a lawyer

Law is what you get when you're silly enough to attempt to spec a formal system in an informal language.

sinuhe69|1 year ago

I also see that communication and a clear understanding of what you want is always the biggest hurdle in software development, not the technical issues or understanding the underlying infrastructure. Most of the time, users realize only after a long struggle that what they want is impossible or in strong conflict with other requirements. Only a human expert with experience and knowledge can advise in such matters. No generative AI will be able to point this out, not even its next generation.

So the process of creating software with the help of AI for a consumer will result in a highly stressful process with many trials and errors. Without the ability to understand the generated code, and relying only on guesswork from both sides, I highly doubt that any consumer will ever want to undertake such ventures. Even for the sake of novelty and fun.

Vegenoid|1 year ago

> Any tool, software language, or AI coder will still be limited by the clarity and completeness of the specs presented.

> Nothing will ever turn your fuzzy intent into your clear best interest.

I've seen a lot of SWEs saying this, and while it's true to an extent, it misses a lot. Good engineers don't simply turn a given spec into code; indeed, there are somewhat deprecating terms for positions that do, like 'code monkey'.

A good engineer does not require the spec they receive to be absolutely precise. They will recognize the intent, and make good judgements about what the requester most probably wants. They will ask clarifying questions when there is important information missing, or a decision that needs to be made where it isn't clear what the requester wants.

LLMs can't do this very well right now, but it doesn't seem like a stretch to say that they will be able to. Will they be able to turn half-baked, very underspecified requests into exactly what the requester is looking for at the press of a button? No. But I think they could get, and often already are, quite good at filling in the blanks. Current LLMs seem to have a way to go before they can recognize deficiencies in a prompt and ask for more info, but that's somewhere in the future.

drdrek|1 year ago

Not only this, but the author lacks the empathy and imagination to understand how most people are wildly different from him and would not want any of this complexity in the slightest.

alphazard|1 year ago

> If you don't have clarity of thought, no intelligence in the world can help you.

This is clearly not true; just look at the tech industry. We have product managers who, in the most literal sense, do not know what they mean when they ask for something. They are paired up with people much smarter than they are, who try to figure out what they mean, and then deliver that.

While this is far from ideal, it's proof that if you don't have clarity of thought, external (human level) intelligence can help you.