dirkc|14 days ago
Maybe in the distant future we'll realize that the most reliable way to prompt LLMs is with a structured language that eliminates ambiguity. It will probably be rather unnatural and take some time to learn.
But this will only happen after the last programmer has died and no one remembers programming languages, compilers, etc. The LLM orbiting in space will essentially just call GCC to execute the 'prompt' and spend the rest of the time pondering its existence ;p
tzs|14 days ago
The Asimov story is on the Internet Archive here [1]. That looks like it is from a handout in a class or something like that and has an introductory paragraph added which I'd recommend skipping.
There is no space between the end of that added paragraph and the first paragraph of the story, so what looks like the first paragraph of the story is really the second. Just skip down to that, and then go up 4 lines to the line that starts "Jehan Shuman was used to dealing with the men in authority [...]". That's where the story starts.
[1] https://ia800806.us.archive.org/20/items/TheFeelingOfPower/T...
dirkc|14 days ago
The story I have half a mind to write is along the lines of a future we envision already being around us, just a whole lot messier. Something along the lines of this xkcd [2].
[1] https://en.wikipedia.org/wiki/A_Canticle_for_Leibowitz
[2] https://xkcd.com/538/
xjm|14 days ago
https://hex.ooo/library/power.html
nilamo|14 days ago
Convincing all of human history and psychology to reorganize itself in order to better serve AI cannot possibly be a real solution.
Unfortunately, the solution is likely going to be further interconnectivity, so the model can just ask the car where it is, whether it's on, how much fuel/battery remains, whether it thinks it's dirty and needs to be washed, etc.
atroon|14 days ago
I think there's a substantial subset of tech companies and honestly tech people who disagree. Not openly, but in the sense of 'the purpose of a system is what it does'.
oxygen_crisis|14 days ago
Effective collaboration relies on iterating over clarifications until ambiguity is acceptably resolved, rather than spending orders of magnitude more effort moving forward on bad assumptions from insufficient communication and starting over from scratch every time you encounter the results of a misunderstanding.
Most AI models still seem deep into the wrong end of that spectrum.
Dylan16807|14 days ago
That wasn't the point at all. The idea is about rediscovering what always worked to make a computer useful, and not even using the fuzzy AI logic.
skjoldr|14 days ago
I'm on the spectrum and I definitely prefer structured interaction with various computer systems to messy human interaction :) There are people not on the spectrum who are able to understand my way of thinking (and vice versa) and we get along perfectly well.
Every human has their own quirks and the capacity to learn how to interact with others. AI is just another entity that stresses this capacity.
stvltvs|14 days ago
So no abstract reasoning.
shagie|14 days ago
On the foolishness of "natural language programming". https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
(and it continues for many more paragraphs)
https://news.ycombinator.com/item?id=8222017 2014 - 154 comments
https://news.ycombinator.com/item?id=35968148 2023 - 65 comments
https://news.ycombinator.com/item?id=43564386 2025 - 277 comments
WarmWash|14 days ago
You see people complaining about LLM ability, and then you see their prompt, and it's the 2006 equivalent of googling "I need to know where I can go for getting the fastest service for car washes in Toronto that does wheel washing too"
Sharlin|14 days ago
"Communication usually fails, except by accident." —Osmo A. Wiio [1]
[1] https://en.wikipedia.org/wiki/Wiio%27s_laws
nradov|14 days ago
https://en.wikipedia.org/wiki/Lojban
gorjusborg|14 days ago
If we're 'lucky' there will still be some 'priests' around like in the Foundation novels. They don't understand how anything works either, but can keep things running by following the required rituals.
alistairSH|14 days ago
So, back to COBOL? :)
chasd00|14 days ago
well more like a structured _querying_ language
grumbel|14 days ago
That has been tried for almost half a century in the form of Cyc [1] and never accomplished much.
The proper solution here is to provide the LLM with more context, context that will likely be collected automatically by wearable devices, screen captures and similar pervasive technology in the not so distant future.
These kinds of quick trick questions are exactly the sort of thing humans fail at if you just ask them out of the blue without context.
[1] https://en.wikipedia.org/wiki/Cyc
sensanaty|14 days ago
We've truly gone full circle here, except now our programming languages have a random chance for an operator to do the opposite of what the operator does at all other times!
milesvp|14 days ago
This can still be a really big win, because of other things that tend to be boilerplate around the core logic, but it's certainly not the panacea that everyone who is largely incapable of being precise with language thinks it is.
YeGoblynQueenne|14 days ago
Like a programming language? But that's the whole point of LLMs, that you can give instructions to a computer using natural language, not a formal language. That's what makes those systems "AI", right? Because you can talk to them and they seem to understand what you're saying, and then reply to you and you can understand what they're saying without any special training. It's AI! Like the Star Trek[1] computer!
The truth of course is that as soon as you want to do something more complicated than a friendly chat you find that it gets harder and harder to communicate what it is you want exactly. Maybe that's because of the ambiguity of natural language, maybe it's because "you're prompting it wrong", maybe it's because the LLM doesn't really understand anything at all and it's just a stochastic parrot. Whatever the reason, at that point you find yourself wishing for a less ambiguous way of communication, maybe a formal language with a full spec and a compiler, and some command line flags and debug tokens etc... and at that point it's not a wonderful AI anymore but a Good, Old-Fashioned Computer, that only does what you want if you can find exactly the right way to say it. Like asking a Genie to make your wishes come true.
______________
[1] TNG duh.
Dylan16807|14 days ago
Does the next paragraph not make that clear?
operator_nil|14 days ago
[deleted]