
Malleable Software in the Age of LLMs (2023)

90 points | danecjensen | 1 year ago | geoffreylitt.com

38 comments


chiasson|1 year ago

> Until now, that vision has been bottlenecked on turning fuzzy informal intent into formal, executable code

Software has been and always will be fundamentally a problem of communication. What many don't realize is that the true challenge of communication is not one of transmission, but clarity of thought and understanding. Any tool, software language, or AI coder will still be limited by the clarity and completeness of the specs presented. What is software after all but complete and precise specs? Nothing will ever turn your fuzzy intent into your clear best interest. If you don't have clarity of thought, no intelligence in the world can help you.

That's not to say I disagree with the author, I agree that more complex creative power will be accessible to a greater number of people, but I think the intrinsic efficiency limit of communication will always be more difficult than people think.

karma_pharmer|1 year ago

> If you don't have clarity of thought, no intelligence in the world can help you.

I disagree.

Moronic people in positions of power are often in this situation. So far this has worked out well for the moronic powerful people: as long as they can convince intelligent people to work for them, they get to be moronic, powerful, and successful. This strategy has been working for them (as a group) for several millennia.

I think this is why the "LLMs will evolve into God In My Pocket" story gets so much traction. To a moronic person in power this sounds a lot like everyday life, but with computers cleaning up their messes instead of humans. It is totally plausible to them.

JohnFen|1 year ago

Indeed. The issue is that of precision. I think it will never be possible to write nontrivial programs by describing them in anything like a conversational human language. I think this is true even if LLMs achieve some sort of godlike intelligence.

The problem is that human languages aren't intended to communicate that sort of precision. That's one of the largest reasons why very specialized technical professions (medicine, engineering, legal, etc.) heavily employ specialized jargon. It's necessary in order to make up for the deficiencies of natural human languages for these sorts of things.

In order to create a program, highly precise descriptions are required. Programming languages can be viewed as a variety of such jargon. If you're doing it with (say) English, you'll always have to use it in a way that makes it no longer "conversational English".

var_cw|1 year ago

> Software has been and always will be fundamentally a problem of communication.

It's a problem of communication with a system. That system can be a human, a group of humans, some esoteric knowledge graph, the internet, or even a single webpage. I believe the author here is focusing on generative UIs from a software developer's standpoint. This is just one form of interaction/communication, and it could be wrong to generalize it across scenarios.

I have been thinking about highly efficient, low-loss forms of communication between a human and a system. I keep converging on voice agents: if they feel a lot more human (something similar to the Hume demo), they could become the default mode of how a human interacts with a system. Again, this would be a specific setting and sounds more consumer-ish, a clear contrast to what the author intends to write about.

sublinear|1 year ago

The same points you made make me wonder if end users even want to modify or write a program.

Even if it's possible, it's still a lot of work. I just don't see that as the typical consumer behavior.

jbeninger|1 year ago

I wish I could remember where I saw this quote - maybe a random comment on Slashdot back in its heyday? Paraphrasing:

"When we finally get a tool that will allow people to write code in plain English, it will be discovered that most people cannot speak plain English."

greenavocado|1 year ago

Agree. Even though I only have twelve years of experience writing software professionally, I started to use LLMs more and more in the past few months to write my code for me given a specification of the task. The top commercial LLMs usually do a good job but often fail catastrophically, especially on esoteric projects. I feel like I am slowly developing into a lawyer because of how thoroughly I have to spec out the problem at hand to give enough context to get exactly what I want.

sinuhe69|1 year ago

I also see that communication and a clear understanding of what you want is always the biggest hurdle in software development, not the technical issues or understanding the underlying infrastructure. Most of the time, users realize only after a long struggle that what they want is impossible or in strong conflict with other requirements. Only a human expert with experience and knowledge can advise in such matters. No generative AI will be able to point this out, not even its next generation.

So the process of creating software with the help of AI for a consumer will result in a highly stressful process with many trials and errors. Without the ability to understand the generated code, and relying only on guesswork from both sides, I highly doubt that any consumer will ever want to undertake such ventures. Even for the sake of novelty and fun.

Vegenoid|1 year ago

> Any tool, software language, or AI coder will still be limited by the clarity and completeness of the specs presented.

> Nothing will ever turn your fuzzy intent into your clear best interest.

I've seen a lot of SWEs saying this, and while it's true to an extent, it misses a lot. Good engineers don't simply turn a given spec into code, indeed there are somewhat deprecating terms for these types of positions, like 'code monkey'.

A good engineer does not require the spec they receive to be absolutely precise. They will recognize the intent, and make good judgements about what the requester most probably wants. They will ask clarifying questions when there is important information missing, or a decision that needs to be made where it isn't clear what the requester wants.

LLMs can't do this very well right now, but it doesn't seem like a stretch to say that they will be able to. Will they be able to turn half-baked, very underspecified requests into exactly what the requester is looking for at the press of a button? No. But I think they could get, and often already are, quite good at filling in the blanks. Current LLMs seem to have a way to go before they can recognize deficiencies in a prompt and ask for more info, but that's somewhere in the near future.

drdrek|1 year ago

Not only this, but the author lacks the empathy and imagination to understand how most people are wildly different from him and would not want any of this complexity in the slightest.

alphazard|1 year ago

> If you don't have clarity of thought, no intelligence in the world can help you.

This is clearly not true, just look at the tech industry. We have product managers who, in the most literal sense, do not know what they mean when they ask for something. They are paired up with people much smarter than they are, who try to figure out what they mean, and then deliver that.

While this is far from ideal, it's proof that if you don't have clarity of thought, external (human level) intelligence can help you.

ickelbawd|1 year ago

I’m quite sure I don’t want a different bespoke UI for everything. And given the non-deterministic nature of these models it’s going to be different even for the same task. Now instead of hunting for the right buttons after a software “update” with some genius redesign: I’ll be hunting every time I use it? Sounds like a nightmare.

I do agree with the author, though, as to the deficiencies of a purely chat-based interface, especially for power users or technical interfaces. It's simply a non-starter for me. That said, LLMs will complement power-user interfaces nicely by helping new users learn them.

hdarshane|1 year ago

The non-deterministic nature of these models is definitely a concern. It will be interesting to see how companies implementing generative interfaces circumvent this. Maybe they create a set of fixed generic UI elements, or write a system prompt describing general design guidelines. Still wouldn't fix it 100%.
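The "fixed generic UI elements" idea can be sketched as a validation layer: the model may only emit components from a known palette, and anything outside it is dropped before rendering. This is just an illustration, not any company's actual implementation; the palette and function names here are hypothetical.

```python
# Hypothetical palette of allowed UI components and their permitted properties.
ALLOWED_COMPONENTS = {
    "button": {"label"},
    "text_input": {"label", "placeholder"},
    "table": {"columns"},
}

def validate_ui_spec(spec):
    """Keep only elements whose type and properties match the fixed palette."""
    valid = []
    for element in spec:
        kind = element.get("type")
        props = set(element) - {"type"}
        # Reject unknown component types and unknown properties alike.
        if kind in ALLOWED_COMPONENTS and props <= ALLOWED_COMPONENTS[kind]:
            valid.append(element)
    return valid

# Simulated (non-deterministic) model output: one element hallucinates
# a component type that isn't in the palette and gets filtered out.
llm_output = [
    {"type": "button", "label": "Submit"},
    {"type": "hologram", "spin": True},
    {"type": "text_input", "label": "Name", "placeholder": "Jane"},
]

print(validate_ui_spec(llm_output))
```

The model can still vary its output from run to run, but every rendered element is guaranteed to come from the fixed set, which is what makes the interface predictable for the user.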

goatlover|1 year ago

Wasn't this the initial promise of Smalltalk, to give end users the ability to easily modify and create their own software to fit their needs for personal computing?

igouy|1 year ago

> As with Simula leading to OOP, this encounter finally hit me with what the destiny of personal computing really was going to be. Not a personal dynamic vehicle, as in Engelbart's metaphor opposed to the IBM "railroads", but something much more profound: a personal dynamic medium. With a vehicle one could wait until high school and give "drivers ed", but if it was a medium, it had to extend into the world of childhood.

"The Early History Of Smalltalk"

https://worrydream.com/EarlyHistoryOfSmalltalk/#p12