ItsBob | 3 months ago
When it comes to (traditional) coding, for the most part, when I program a function to do X, every single time I run that function from now until the heat death of the sun, it will produce Y. Forever! When it does, we understand why, and when it doesn't, we can also understand why it didn't!
When I use AI to perform X, every single time I run that AI from now until the heat death of the sun, it will maybe produce Y. Forever! When it does, we don't understand why, and when it doesn't, we don't understand that either!
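The contrast can be sketched in a few lines of Python: a pure function maps the same input to the same output every time, while anything that samples (as LLM decoding does at nonzero temperature) need not. The function names and the Gaussian-noise stand-in here are hypothetical, just to make the point concrete:

```python
import random

def traditional_x(n):
    # Pure function: same input, same output, forever.
    return n * 2

def ai_x(n, temperature=1.0):
    # Stand-in for sampled generation: repeated calls on the
    # same input can disagree, and the "why" is buried in the
    # randomness of the sampling step.
    noise = random.gauss(0, temperature)
    return n * 2 + round(noise)

# The deterministic version is reproducible across runs:
assert all(traditional_x(21) == 42 for _ in range(1000))

# The stochastic stand-in usually lands near 42, but the set of
# observed outputs over many runs contains more than one value.
results = {ai_x(21) for _ in range(1000)}
```

This is the whole complaint in miniature: with `traditional_x` you can reason about every output; with `ai_x` you can only talk about a distribution of outputs.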
We know that Brenda might screw up sometimes, but she doesn't run at the speed of light, isn't able to produce a thousand lines of Excel macro in 3 seconds, doesn't hallucinate (well, let's hope she doesn't), can follow instructions, etc. If she does make a mistake, we can find it, fix it, ask her what happened, etc., before the damage is too great.
In short: when AI does anything at all, we only have, at best, a rough approximation of why it did it. With Brenda, it only takes a couple of questions to figure it out!
Before anyone says I'm against AI: I love it and am neck-deep in it all day when programming (not vibe-coding!), so I have a full understanding of what I'm getting myself into, but I also know its limitations!
nerdjon | 3 months ago
To make this even worse, it may produce Y just enough times to seem reliable, and then it is unleashed without supervision, running thousands or millions of times, wreaking havoc by producing Z in a large number of places.
ryandrake | 3 months ago
A computer program should deliver reliable, consistent output if it is consistently given the same input. If I wanted inconsistency and unreliability, I'd ask a human to do it.
cteiosanu | 3 months ago
Reminds me of the famous Xerox scanning bug: https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...
A4ET8a8uTh0_v2 | 3 months ago
But you are absolutely right about one thing. Brenda can be asked and, depending on her experience, she might give you a good idea of what might have happened. LLMs still seem to not have that 'feature'.
tekbruh9000 | 3 months ago
Brenda just recalls some predetermined behaviors she's lived out before. She cannot recall any given moment the way we want to believe she can.
Ever think to ask Brenda what else she might spend her life on if these 100% ephemeral office role-play "be good little missionaries for the Wall Street/dollar" gigs didn't exist?
You're revealing your ignorance of how people work while being anxious about our ignorance of how the machine works. You have acclimated to your own ignorance well enough, it seems. What's the big deal if we don't understand the AI entirely? Most drivers are not ASE-certified mechanics. Most programmers are not electrical engineers. Most electrical engineers are not physicists. I can see that it's not raining without being a climatologist. Experts circumlocute in the language of their expertise without realizing that their language does not give rise to reality. Reality gives rise to the language. So reality will be fine if we don't always have the language.
Think of a random date generator that only generates dates in your lived past. It generates one. Once you read the date and confirm you were alive, can you describe what you did that day? Oh no! You don't have a memory of every moment to generate language from. Cognitive function returned null. Universe intact.
Lacking the understanding you desire is unimportant.
You think you're cherishing Brenda, but really you're just projecting co-dependency, insisting that others LARP effort that probably doesn't matter. It's just the social gossip we were raised on, so it takes up a lot of our working memory.