e3bc54b2|5 months ago
An abstraction is a deterministic, pure function that, when given A, always returns B. This allows the consumer to rely on the abstraction. That reliance frees the consumer from having to implement A->B itself, thus allowing it to move up the ladder.
LLMs, by their very nature, are probabilistic. Probabilistic is NOT deterministic. This means the consumer is never really sure that, given A, the returned value is B. The consumer therefore has to check whether the returned value is actually B, and depending on how complex the A->B transformation is, the checking function can be as complex as implementing the abstraction in the first place.
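A toy sketch of the point above. The `llm_extract_total` call is hypothetical and stubbed so the example runs; the substance is in `consume`, which has to re-verify the A -> B contract itself because the producer is probabilistic:

```python
import json

def llm_extract_total(invoice_text: str) -> str:
    # Hypothetical probabilistic call: the same input may, in general,
    # yield different output. Stubbed with a fixed string here.
    return '{"total": 42.0}'

def consume(invoice_text: str) -> float:
    raw = llm_extract_total(invoice_text)
    # Because the producer is probabilistic, the consumer cannot rely on
    # the abstraction and must check the output against the contract.
    data = json.loads(raw)            # may raise on malformed output
    total = data["total"]             # may raise if the key is missing
    if not isinstance(total, (int, float)) or total < 0:
        raise ValueError("output violated the A -> B contract")
    return float(total)
```

The checking logic here is trivial only because the contract ("a JSON object with a non-negative numeric total") is trivial; for a richer transformation, the validator grows toward the complexity of the transformation itself.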
stuartjohnson12|5 months ago
We can use different words if you like (and I'm not convinced that delegation isn't colloquially a form of abstraction) but you can't control the world by controlling the categories.
hosh|5 months ago
There is another form of delegation where the work needing to be done is imposed onto another, in order to exploit and extract value. We are trying to do this with LLMs now, but we also did this during the Industrial Revolution, and before that, humanity enslaved each other to get the labor to extract value out of the land. This value extraction leads to degeneration, something that happens when living systems die.
While the Industrial Revolution afforded humanity a middle class, and appeared to distribute the wealth that came about — resulting in better standards of living — it came along with numerous ills that, as a society, we still have not really figured out.
I think that, collectively, we figure that the LLMs can do the things no one wants to do, and so _everyone_ can enjoy a better standard of living. I think doing it this way, though, leads to a life without purpose or meaning. I am not at all convinced that LLMs are going to give us back that time … not unless we figure out how to develop AIs that help grow humans instead of replacing them.
The following article is an example of what I mean by designing an AI that helps develop people instead of replacing them: https://hazelweakly.me/blog/stop-building-ai-tools-backwards...
TheOtherHobbes|5 months ago
All of which is beside the point, because soon-ish LLMs are going to develop their own equivalents of experimentation, formalisation of knowledge, and collective memory, and then solutions will become standardised and replicable - likely with a paradoxical combination of a huge loss of complexity and solution spaces that are humanly incomprehensible.
The arguments here are like watching carpenters arguing that a steam engine can't possibly build a table as well as they can.
Which is, you know, true. But that wasn't how industrialisation worked out.
threatofrain|5 months ago
Colleagues are the same thing. You may abstract business domains and say that something is the job of your colleague, but sometimes that abstraction breaks.
Still good enough to draw boxes and arrows around.
Paradigma11|5 months ago
So are humans, and yet people pay other people to write code for them.
groby_b|5 months ago
That must be why we talk about leaky abstractions so much.
They're neither pure functions, nor are they always deterministic. We as a profession have been spoilt by mostly deterministic code (and even then, we had a chunk of probabilistic algorithms, depending on where you worked).
Heck, I've worked with compilers that used simulated annealing for optimization, 2 decades ago.
Yes, it's a sea change for CRUD/SaaS land. But there are plenty of folks outside of that who actually took the "engineering" part of software engineering seriously, and understand just fine how to deal with probabilistic processes and risk management.
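For readers unfamiliar with the technique named above, here is a minimal, generic simulated-annealing sketch (not any particular compiler's implementation): the optimizer sometimes accepts a worse state, with a probability that decays as the "temperature" cools, so the process is inherently probabilistic unless the RNG is seeded.

```python
import math
import random

def anneal(cost, neighbor, x0, seed=0, steps=2000):
    # Simulated annealing: walk the search space, always accepting
    # improvements and occasionally accepting worse states with
    # probability exp((c - cy) / t), where t cools over time.
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    for step in range(1, steps + 1):
        t = 1.0 / step                  # simple cooling schedule
        y = neighbor(x, rng)
        cy = cost(y)
        if cy < c or rng.random() < math.exp((c - cy) / t):
            x, c = y, cy
    return x
```

For instance, minimizing `(x - 3) ** 2` over the integers with a `x ± 1` neighbor function settles near 3 — but the path taken (and, with an unseeded RNG, potentially the result) depends on the random draws, which is exactly the risk-management territory the comment describes.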
pmarreck|5 months ago
I believe that if you can tweak the temperature input (which OpenAI recently turned off in their API, I noticed), setting it to 0 should hypothetically result in the same output, given the same input.
oceanplexian|5 months ago
This couldn't be any more wrong. LLMs are 100% deterministic. You just don't observe that feature because you're renting it from some cloud service. Run it on your own hardware with a consistent seed, and it will return the same answer to the same prompt every time.
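A toy model of the claim (not a real LLM — just a seeded sampler): when every random draw comes from an RNG initialized with a fixed seed, the same prompt produces the same output every time. In practice, real local inference also needs deterministic kernels and identical hardware for this to hold.

```python
import random

def generate(prompt: str, seed: int, steps: int = 5) -> str:
    # With a fixed seed, the sampler's pseudo-random draws are
    # reproducible, so the same prompt yields the same sequence.
    rng = random.Random(seed)
    vocab = ["the", "cat", "sat", "on", "mat"]  # stand-in vocabulary
    return " ".join(rng.choice(vocab) for _ in range(steps))
```

Calling `generate("...", seed=42)` twice returns the identical string; changing the seed changes the draws.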
maltalex|5 months ago
LLMs, as used in practice in 99.9% of cases, are probabilistic.
glitchc|5 months ago
Although I'm on the side of getting my hands dirty, I'm not sure the difference is that significant. A modern compiler embeds a considerable degree of probabilistic behaviour.
davidrupp|5 months ago
Can you give some examples?
bckr|5 months ago
The LLM expands the text of your design into a full application.
The commenter you’re responding to is clear that they are checking the outputs.
charcircuit|5 months ago
So are compilers, but people still successfully use them. Compilers and LLMs can both be made deterministic but for performance reasons it's convenient to give up that guarantee.
daveguy|5 months ago
That is just not correct. There is no rule that says an abstraction is strictly functional or deterministic.
In fact, the original abstraction was likely language, which is clearly neither.
The cleanest and easiest abstractions to deal with have those properties, but they are not required.
robenkleene|5 months ago
1. Language is an abstraction and it's not deterministic (it's really lossy)
2. LLMs behave differently than the abstractions involved in building software, where normally if you gave the same input, you'd expect the same output.