top | item 45116294

e3bc54b2 | 5 months ago

As the other comment said, LLMs are not an abstraction.

An abstraction is a deterministic, pure function that, when given A, always returns B. This allows the consumer to rely on the abstraction. This reliance frees the consumer from having to implement the A->B transformation itself, thus allowing it to move up the ladder.

LLMs, by their very nature, are probabilistic. Probabilistic is NOT deterministic. Which means the consumer is never really sure that, given A, the returned value is B. Which means the consumer now has to check whether the returned value is actually B, and depending on how complex the A->B transformation is, the checking function can be equivalent in complexity to implementing said abstraction in the first place.
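A toy sketch of the argument (made-up names, with sorting standing in for an arbitrary A->B transformation): a deterministic abstraction can be relied on blindly, while a probabilistic one forces the consumer to write a checker, and here the checker literally re-does the work.

```python
import random

# Deterministic abstraction: the caller can rely on A -> B unconditionally.
def sort_numbers(xs):
    return sorted(xs)

# Probabilistic "abstraction": occasionally wrong, like an LLM can be.
def flaky_sort(xs):
    out = sorted(xs)
    if random.random() < 0.1:   # inject an occasional error
        random.shuffle(out)
    return out

# The consumer now needs a checker. Here the check is cheap, but for a
# complex A -> B the checker can approach the abstraction's own complexity.
def checked_flaky_sort(xs):
    out = flaky_sort(xs)
    if list(out) == sorted(xs):
        return out
    return sorted(xs)   # fall back to doing the work ourselves
```

Note that the checker in this sketch ends up containing a full reimplementation of the transformation, which is exactly the complexity argument above.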

stuartjohnson12|5 months ago

It's delegation then.

We can use different words if you like (and I'm not convinced that delegation isn't colloquially a form of abstraction) but you can't control the world by controlling the categories.

hoppp|5 months ago

Delegation of intelligence? So one party gets more stupid for the other to be smart?

robenkleene|5 months ago

One argument for abstraction being different from delegation: when a programmer uses an abstraction, I'd expect the programmer to be able to work without the abstraction if necessary, and also to be able to build their own abstractions. I wouldn't have that expectation with delegation.

hosh|5 months ago

There is a form of delegation that develops the people involved, so that people can continue to contribute and grow. Each individual can contribute what is unique to them, and grow more capable as they do so. Both people, and the community of those people remain alive, lively, and continue to grow. Some people call this paradigm “regenerative”; only living systems regenerate.

There is another form of delegation where the work that needs to be done is imposed onto another in order to exploit and extract value. We are trying to do this with LLMs now, but we also did this during the Industrial Revolution, and before that, humanity enslaved each other to get the labor to extract value out of the land. This value extraction leads to degeneration, something that happens when living systems die.

While the Industrial Revolution afforded humanity a middle class, and appeared to distribute the wealth that came about, resulting in better standards of living, it came along with numerous ills that, as a society, we still have not really figured out.

I think that, collectively, we figure that the LLMs can do the things no one wants to do, and so _everyone_ can enjoy a better standard of living. I think doing it this way, though, leads to a life without purpose or meaning. I am not at all convinced that LLMs are going to give us back that time … not unless we figure out how to develop AIs that help grow humans instead of replacing them.

The following article is an example of what I mean by designing an AI that helps develop people instead of replacing them: https://hazelweakly.me/blog/stop-building-ai-tools-backwards...

TheOtherHobbes|5 months ago

Human developers by their very nature are probabilistic. Probabilistic is NOT deterministic. Which means the manager is never really sure if the developer solved the problem, or if they introduced some bugs, or if their solution is robust and ideal even when it seems to be working.

All of which is beside the point, because soon-ish LLMs are going to develop their own equivalents of experimentation, formalisation of knowledge, and collective memory, and then solutions will become standardised and replicable - likely with a paradoxical combination of a huge loss of complexity and solution spaces that are humanly incomprehensible.

The arguments here are like watching carpenters arguing that a steam engine can't possibly build a table as well as they can.

Which is, you know, true. But that wasn't how industrialisation worked out.

threatofrain|5 months ago

So it's a noisy abstraction. Programmers deal with that all the time. Whenever you bring in an outside library or dependency there's an implicit contract that you don't have to look underneath the abstraction. But it's noisy so sometimes you do.

Colleagues are the same thing. You may abstract business domains and say that something is the job of your colleague, but sometimes that abstraction breaks.

Still good enough to draw boxes and arrows around.

delfinom|5 months ago

Noisy is an understatement: it's buggy, it's error-filled, it's time-consuming and inefficient. It's the exact opposite of automation, but great for job security.

soraminazuki|5 months ago

Competent programmers use well-established libraries and dependencies, not ones as unreliable as LLMs.

Paradigma11|5 months ago

"LLMs, by their very nature are probabilistic."

So are humans and yet people pay other people to write code for them.

const_cast|5 months ago

Yes but we don't call humans abstractions. A software engineer isn't an abstraction over code.

benterix|5 months ago

Yeah, but in spite of that, if you ask me to take a Jira ticket and do it properly, there is a much higher chance that I'll do it reliably and the rest of my team will be satisfied, whereas if I bring an LLM into the equation it will wreak havoc. I've witnessed a few cases where people got fired, not really for using LLMs but for not reviewing their output properly, which I can even understand somehow, as reviewing code is much less fun than creating it.

zasz|5 months ago

Yeah and the people paying other people to write code won't understand how the code works. AI as currently deployed stands a strong chance of reducing the ranks of the next generation of talented devs.

groby_b|5 months ago

> An abstraction is a deterministic, pure function

That must be why we talk about leaky abstractions so much.

They're neither pure functions, nor are they always deterministic. We as a profession have been spoilt by mostly deterministic code (and even then, we had a chunk of probabilistic algorithms, depending on where you worked).

Heck, I've worked with compilers that used simulated annealing for optimization, 2 decades ago.

Yes, it's a sea change for CRUD/SaaS land. But there are plenty of folks outside of that who actually took the "engineering" part of software engineering seriously, and understand just fine how to deal with probabilistic processes and risk management.

pmarreck|5 months ago

> LLMs, by their very nature are probabilistic

I believe that if you can tweak the temperature input (OpenAI recently turned it off in their API, I noticed), an input of 0 should hypothetically result in the same output, given the same input.
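A sketch of why temperature 0 implies repeatable output in principle: temperature divides the logits before the softmax, and as it goes to 0 sampling degenerates into argmax, i.e. greedy decoding. This is toy code illustrating the mechanism, not any real provider's API (and in real deployments, floating-point and batching nondeterminism on GPUs can still break exact repeatability).

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from raw logits at a given temperature."""
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token -> deterministic.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Temperature-scaled softmax, then sample from the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numeric stability
    weights = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1
```

With `temperature=0` the same logits always yield the same token; with any positive temperature, the choice depends on the random draw.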

bdhcuidbebe|5 months ago

That only works if you decide to stick to that exact model for the rest of your life, obviously.

sarchertech|5 months ago

No one uses temperature 0 because the results are terrible.

oceanplexian|5 months ago

> LLMs, by their very nature are probabilistic.

This couldn't be any more wrong. LLMs are 100% deterministic. You just don't observe that feature because you're renting it from some cloud service. Run it on your own hardware with a consistent seed, and it will return the same answer to the same prompt every time.
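A toy illustration of the fixed-seed point (Python's PRNG standing in for a model's sampler, not a real LLM): holding the seed and the "model" constant makes the sampled output identical on every run. The caveat, which the parent glosses over, is that real GPU inference can still vary due to floating-point and batching effects.

```python
import random

def generate(seed, vocab, n):
    # Stand-in for an LLM sampler: a PRNG seeded deterministically.
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]

vocab = ["the", "cat", "sat", "on", "a", "mat"]
run1 = generate(1234, vocab, 8)
run2 = generate(1234, vocab, 8)
assert run1 == run2  # same seed + same "model" -> same output
```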

maltalex|5 months ago

That’s like arguing that random number generators are not random if you give them a fixed seed. You’re splitting hairs.

LLMs, as used in practice in 99.9% of cases, are probabilistic.

kbelder|5 months ago

I think 'chaotic' is a better descriptor than 'probabilistic'. It certainly follows deterministic rules, unless randomness is deliberately injected. But the interaction of the rules and the context they operate in is so convoluted that you can't trace an exact causal relationship between the input and output.

CuriouslyC|5 months ago

Ok, let's call it a stochastic transformation over abstraction spaces. It's basically sampling from the set of deterministic transformations given the priors established by the prompt.

soraminazuki|5 months ago

You're bending over backwards to imply that it's deterministic without saying it is. It's not. LLMs, by their very nature, don't have a well-defined relationship between their input and output. They make tons of mistakes that are utterly incomprehensible because of that.

chermi|5 months ago

Just want to commend you for the perfect way of describing this re: not being an abstraction.

upcoming-sesame|5 months ago

Agree, but does this distinction really make a difference? I think the OP's point is still valid.

glitchc|5 months ago

> LLMs, by their very nature are probabilistic. Probabilistic is NOT deterministic.

Although I'm on the side of getting my hands dirty, I'm not sure if the difference is that different. A modern compiler embeds a considerable degree of probabilistic behaviour.

ashton314|5 months ago

Compilers use heuristics which may result in dramatically different results between compiler passes. Different timing effects during compilation may constrain certain optimization passes (e.g. "run algorithm x over the nodes and optimize for y seconds") but in the end the result should still not modify defined observable behavior, modulo runtime. I consider that to be dramatically different than the probabilistic behavior we get from an LLM.

davidrupp|5 months ago

> A modern compiler embeds a considerable degree of probabilistic behaviour.

Can you give some examples?

eikenberry|5 months ago

Local models can be deterministic and that is one of the reasons why they will win out over service based models once the hardware becomes available.

bckr|5 months ago

The LLM is not part of the application.

The LLM expands the text of your design into a full application.

The commenter you’re responding to is clear that they are checking the outputs.

rajap|5 months ago

With proper testing you can make sure that, given A, the returned value is B.
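One way that checking is commonly done in practice (a sketch with hypothetical names, not a real client library): treat the model's reply as untrusted input, validate it against the expected contract, and retry on failure.

```python
import json

def validate_reply(raw):
    """Check an untrusted model reply against the expected contract:
    JSON with a string 'name' and a non-negative int 'count'."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    if not isinstance(data.get("name"), str):
        raise ValueError("missing or invalid 'name'")
    count = data.get("count")
    if not isinstance(count, int) or count < 0:
        raise ValueError("missing or invalid 'count'")
    return data

def call_with_retry(ask, max_tries=3):
    # 'ask' is a hypothetical function that queries the model once.
    for _ in range(max_tries):
        try:
            return validate_reply(ask())
        except ValueError:
            continue
    raise RuntimeError("no valid reply after %d tries" % max_tries)
```

This only works when the contract is easy to state, which is the OP's caveat: for a complex A->B, writing the validator can cost as much as writing the transformation.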

RAdrien|5 months ago

This is an excellent reply

charcircuit|5 months ago

>LLMs, by their very nature are probabilistic.

So are compilers, but people still successfully use them. Compilers and LLMs can both be made deterministic but for performance reasons it's convenient to give up that guarantee.

hn_acc1|5 months ago

AIUI, if you made an LLM deterministic, every mostly-similar prompt would return the same result (i.e. access the same training data set) and if that's wrong, the LLM is just plain broken for that example. Hacked-in "temperature" (randomness) is the only way to hopefully get a correct result - eventually.

WD-42|5 months ago

What are these non deterministic compilers I keep hearing about, honestly curious.

daveguy|5 months ago

> An abstraction is a deterministic, pure function, than when given A always returns B.

That is just not correct. There is no rule that says an abstraction is strictly functional or deterministic.

In fact, the original abstraction was likely language, which is clearly neither.

The cleanest and easiest abstractions to deal with have those properties, but they are not required.

robenkleene|5 months ago

This is such a funny example, because language is the main way that we communicate with LLMs. Which means you can tie both of your points together in the same example: if you take a scene and describe it in words, then have an LLM reconstruct the scene from the description, you'd likely get a scene that looks very different from the original source. This simultaneously makes both your point and the point of the person you're responding to:

1. Language is an abstraction and it's not deterministic (it's really lossy)

2. LLMs behave differently than the abstractions involved in building software, where normally if you gave the same input, you'd expect the same output.

beepbooptheory|5 months ago

What is the thing that language itself abstracts?