chiasson|1 year ago
Software has been and always will be fundamentally a problem of communication. What many don't realize is that the true challenge of communication is not one of transmission, but one of clarity of thought and understanding. Any tool, software language, or AI coder will still be limited by the clarity and completeness of the specs presented. What is software, after all, but complete and precise specs? Nothing will ever turn your fuzzy intent into your clear best interest. If you don't have clarity of thought, no intelligence in the world can help you.
That's not to say I disagree with the author; I agree that more complex creative power will become accessible to a greater number of people. But I think the intrinsic efficiency limit of communication will always be harder to overcome than people expect.
karma_pharmer|1 year ago
I disagree.
Moronic people in positions of power are often in this situation. So far this has worked out well for the moronic powerful people: as long as they can convince intelligent people to work for them, they get to be moronic, powerful, and successful. This strategy has been working for them (as a group) for several millennia.
I think this is why the "LLMs will evolve into God In My Pocket" story gets so much traction. To a moronic person in power this sounds a lot like everyday life, but with computers cleaning up their messes instead of humans. It is totally plausible to them.
Terr_|1 year ago
I don't want my tools to work that way. Especially since the not-so-assistant may be making decisions to favor its parent-corporation.
> Once you were in the hands of a Grand Vizier, you were dead. Grand Viziers were always scheming megalomaniacs. It was probably in the job description: "Are you a devious, plotting, unreliable madman? Ah, good, then you can be my most trusted minister."
-- Interesting Times by Terry Pratchett
JohnFen|1 year ago
The problem is that human languages aren't intended to communicate that sort of precision. That's one of the largest reasons why very specialized technical professions (medicine, engineering, legal, etc.) heavily employ specialized jargon. It's necessary in order to make up for the deficiencies of natural human languages for these sorts of things.
In order to create a program, very precise descriptions are required. Programming languages can be viewed as a variety of such jargon. If you're doing it with (say) English, you'll always have to use it in a way that makes it no longer "conversational English".
flawsofar|1 year ago
Prompt engineering and getting structured data in and out of an LLM naturally lead you to something like precise English with JSON and pseudocode syntax sprinkled in
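A minimal sketch of what that hybrid style can look like in practice. The schema, field names, and the simulated reply below are all hypothetical, invented for illustration; no real LLM API is called.

```python
import json

def build_prompt(task: str) -> str:
    # Precise English instructions with a JSON "shape" sprinkled in,
    # as the comment describes. The schema here is made up.
    return (
        f"Task: {task}\n"
        "Respond ONLY with a JSON object matching this shape:\n"
        '{"summary": "<one sentence>", "steps": ["<step>", ...], "confidence": <0.0-1.0>}\n'
        "Do not include any prose outside the JSON."
    )

def parse_response(raw: str) -> dict:
    # Tolerate models that wrap the JSON in surrounding chatter by
    # slicing from the first '{' to the last '}'.
    start, end = raw.find("{"), raw.rfind("}")
    return json.loads(raw[start : end + 1])

# Simulated model output (no API call here).
fake_reply = 'Sure! {"summary": "Parse the log file.", "steps": ["open", "scan"], "confidence": 0.8}'
result = parse_response(fake_reply)
print(result["summary"])  # Parse the log file.
```

The point is that the prompt itself is neither conversational English nor code: it is precise English constrained by a machine-readable schema.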
var_cw|1 year ago
It's a problem of communication with a system. That system can be a human, a group of humans, some esoteric knowledge graph, the internet, or even a single web page. I believe the author here is focusing on generative UIs from a software developer's standpoint. This is just one form of interaction/communication, and it could be wrong to generalize it across scenarios.
I have been thinking about highly efficient, near-lossless forms of communication between a human and a system. I keep converging on voice agents: if they feel a lot more human (something similar to the Hume demo), they could become the default mode of how a human interacts with a system. Again, this will be a specific setting and sounds more consumer-ish, a clear contrast to what the author intends to write.
sublinear|1 year ago
Even if it's possible, it's still a lot of work. I just don't see that as the typical consumer behavior.
philipswood|1 year ago
It almost does it.
Or it can only just do it, with contortions and a lot of repetitive toil on your side.
I think writing little plugins and drivers to do the thing you want with an LLM is something that could be built into a lot of software.
I don't think LLMs can architect and build whole systems yet, but this niche is something that can be done.
abecedarius|1 year ago
Some relevant background: Bonnie Nardi, A Small Matter of Programming.
jbeninger|1 year ago
"When we finally get a tool that will allow people to write code in plain English, it will be discovered that most people cannot speak plain English."
dspillett|1 year ago
This is why LLMs in their current iterations are not the danger to our jobs that some fear them to be: end users and other stakeholders lack the precision to properly spec what they want a tool/service to do (and often don't even accurately know what they want), and they don't have the time (or if they do, don't have the patience) to go back and forth iterating over the wording of the spec to get things adequately defined.
I'm currently an LLM refusenik⁰ so might be missing some context, but from my outside view I get the impression that they do a good job at simple boilerplate-y stuff but don't save time/effort/thinking on anything much more complex. I'm sure most devs beyond the beginner are happy to have those boilerplate tasks taken off them so they can do the fun stuff, but equally end users aren't going to spend the time working with the tools to get anything more interesting than very simple programming/automation tasks done.
--
[0] I'm on the “is that really morally right?” side of the fence on how the training materials are sourced, particularly with regard to code covered by licences like the GPL family¹, and I'm anal enough to not use something I have that sort of question about even if it makes my life a little harder.
[1] If the assurance is that chunks of code can't be regurgitated, and that this makes it all fine both morally and legally, why are none of the publicly usable LLMs, such as MS's Copilot, trained on, say, Microsoft's Office/Windows/other code as well as public repositories? Surely they should be as assured that it isn't a problem as they want everyone else to be?
082349872349872|1 year ago
Law is what you get when you're silly enough to attempt to spec a formal system in an informal language.
sinuhe69|1 year ago
So for a consumer, creating software with the help of AI will be a highly stressful process of trial and error. Without the ability to understand the generated code, and relying only on guesswork from both sides, I highly doubt that any consumer will ever want to undertake such ventures, even for the sake of novelty and fun.
Vegenoid|1 year ago
> Nothing will ever turn your fuzzy intent into your clear best interest.
I've seen a lot of SWEs saying this, and while it's true to an extent, it misses a lot. Good engineers don't simply turn a given spec into code; indeed, there are somewhat deprecating terms for positions like that, such as 'code monkey'.
A good engineer does not require the spec they receive to be absolutely precise. They will recognize the intent, and make good judgements about what the requester most probably wants. They will ask clarifying questions when there is important information missing, or a decision that needs to be made where it isn't clear what the requester wants.
LLMs can't do this very well right now, but it doesn't seem like a stretch to say that they will be able to. Will they be able to turn half-baked, very underspecified requests into exactly what the requester is looking for at the press of a button? No. But I think they could get, and often already are, quite good at filling in the blanks. Current LLMs seem to have a way to go before they can recognize deficiencies in a prompt and ask for more info, but that's somewhere in the future.
alphazard|1 year ago
This is clearly not true; just look at the tech industry. We have product managers who, in the most literal sense, do not know what they mean when they ask for something. They are paired up with people much smarter than they are, who try to figure out what they mean, and then deliver that.
While this is far from ideal, it's proof that if you don't have clarity of thought, external (human level) intelligence can help you.