(no title)
kashyapc | 2 months ago
It's wild to read this bit. Of course, if it quacks like a human, it's hard to resist quacking back. As the article says, being less reckless with the vocabulary ("agents", "general intelligence", etc.) could be one way to mitigate this.
I appreciate the frank admission that the author struggled for two years. Maybe the balance of spending time with machines vs. fellow primates is out of whack. It feels dystopic to see very smart people being insidiously driven to sleep-walk into "parasocial bonds" with large language models!
It reminds me of the movie Her[1], where the guy falls "madly in love with his laptop" (as the lead character's ex-wife expresses in anguish). The film was way ahead of its time.
mjr00|2 months ago
There's a lot of black magic and voodoo and assumptions that speaking in proper English with a lot of detailed language helps, and maybe it does with some models, but I suspect most of it is a result of (sub)consciously anthropomorphizing the LLM.
Arainach|2 months ago
I've tried and failed to write this in a way that won't come across as snobbish, but that is not the intent.
It's a matter of standards. Using proper language is how I think. I'm incapable of doing otherwise even out of laziness. Pressing the shift key and the space bar to do it right costs me nothing. It's akin to shopping carts in parking lots. You won't be arrested or punished for not returning the shopping cart to where it belongs, you still get your groceries (the same results), but it's what you do in a civilized society and when I see someone not doing it that says things to me about who they are as a person.
kashyapc|2 months ago
If one treats an LLM like a human, he has a bigger crisis to worry about than punctuation.
> It always confuses me when I see shared chats with prompts and interactions that have proper capitalization, punctuation, grammar, etc
No need for confusion. I'm one of those who does aim to write cleanly, whether I'm talking to a man or a machine. English is my third language, by the way. Why the hell do I bother? Because you play like you practice! No ifs, buts, or maybes. If you start writing sloppily because you go, "it's just an LLM!", you'll silently be building a bad habit and start doing the same with humans.
Pay attention to your instant messaging circles (Slack and its ilk): many people can't resist hitting send without even writing a half-decent sentence. They're too eager to submit their stream of thought fragments. Sometimes I feel second-hand embarrassment for them.
Punctuation, capitalization, and such less so. I may be misguided, but on the set of questions and answers on the internet, I'd like to believe there is some correlation between proper punctuation and the quality of the answer.
Enough that, on longer prompts, I bother to at least clean up my prompts. (Not so often on one-offs, as you say. I treat it similar to Google: I can depend on context for the LLM to figure out I mean "phone case" instead of "phone vase.")
the_mitsuhiko|2 months ago
It's not that simple. Proportionally I spend more time with humans, but if the machine behaves like a human and has the ability to recall, it becomes a human-like interaction. From my experience, what makes the system "scary" is the ability to recall. I have an agent that recalls conversations you had with it before, and as a result it changes how you interact with it. I can see that triggering behaviors in humans that are unhealthy.
But our inability to name these things properly doesn't help. I think treating it as a machine, on the same level as a coffee maker, does help set the right boundaries.
kashyapc|2 months ago
Yuval Noah Harari's "simple" idea comes to mind (I often disagree with his thinking, as he tends to make bold and sweeping statements on topics well out of his expertise area). It sounds a bit New Age-y, but maybe it's useful in the context of LLMs:
"How can you tell if something is real? Simple: If it suffers, it is real. If it can't suffer, it is not real."
An LLM can't suffer. So no need to get one's knickers in a twist with mental gymnastics.
mekoka|2 months ago
Why would you say pretending? I would say remembering.
tylervigen|2 months ago
I’ve found it very grounding, despite heavily using the bag of words.
[0] https://www.experimental-history.com/p/bag-of-words-have-mer...
mlinhares|2 months ago
It feels like this situation is much more worrisome as you can actually talk to the thing and it responds to you alone, so it definitely feels like there's something there.
mannanj|2 months ago
I think a lot of the thinking I hear along the lines of "LLMs aren't conscious nor human" falls into this camp: it helps us avoid the dissonance of needing to feel secure and top-of-the-hierarchy.
Curious what you think.
coffeefirst|2 months ago
If I’m right, the gap isn’t about what the tool can do, but about the fact that some people see an electric screwdriver (which is sometimes useful) and others see what feels to them like a robot intern.