kashyapc | 2 months ago

"Because LLMs now not only help me program, I'm starting to rethink my relationship to those machines. I increasingly find it harder not to create parasocial bonds with some of the tools I use. I find this odd and discomforting [...] I have tried to train myself for two years, to think of these models as mere token tumblers, but that reductive view does not work for me any longer."

It's wild to read this bit. Of course, if it quacks like a human, it's hard not to quack back. As the article says, being less reckless with the vocabulary ("agents", "general intelligence", etc.) could be one way to mitigate this.

I appreciate the frank admission that the author struggled for two years. Maybe the balance of spending time with machines vs. fellow primates is out of whack. It feels dystopian to see very smart people insidiously driven to sleepwalk into "parasocial bonds" with large language models!

It reminds me of the movie Her[1], where the guy falls "madly in love with his laptop" (as the lead character's ex-wife puts it, in anguish). The film was way ahead of its time.

[1] https://www.imdb.com/title/tt1798709/

mjr00 | 2 months ago

It helps a lot if you treat LLMs like a computer program instead of a human. It always confuses me when I see shared chats with prompts and interactions that have proper capitalization, punctuation, grammar, etc. I've never had issues getting results I've wanted with much simpler prompts like (looking at my own history here) "python grpc oneof pick field", "mysql group by mmyy of datetime", "python isinstance literal". Basically the same way I would use Google; after all, you just type in "toledo forecast" instead of "What is the weather forecast for the next week in Toledo, Ohio?", don't you?
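
For the curious, the first of those prompts is fishing for something like the sketch below (the message and field names are invented for illustration):

    # Read whichever field of a protobuf "oneof" is actually set.
    # Assumes a hypothetical message like:
    #   message Payload {
    #     oneof kind {
    #       string text = 1;
    #       bytes blob = 2;
    #     }
    #   }
    def pick_oneof_field(msg):
        field = msg.WhichOneof("kind")  # name of the set field, or None
        return getattr(msg, field) if field is not None else None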

There's a lot of black magic and voodoo in the assumption that speaking in proper English with detailed language helps. Maybe it does with some models, but I suspect most of it is the result of (sub)consciously anthropomorphizing the LLM.

Arainach | 2 months ago

> It always confuses me when I see shared chats with prompts and interactions that have proper capitalization, punctuation, grammar, etc.

I've tried and failed to write this in a way that won't come across as snobbish, but that is not the intent.

It's a matter of standards. Using proper language is how I think; I'm incapable of doing otherwise, even out of laziness. Pressing the shift key and the space bar to do it right costs me nothing. It's akin to shopping carts in parking lots: you won't be arrested or punished for not returning the cart to where it belongs, and you still get your groceries (the same results), but returning it is what you do in a civilized society, and when I see someone not doing it, that says something to me about who they are as a person.

kashyapc | 2 months ago

> It helps a lot if you treat LLMs like a computer program instead of a human.

If one treats an LLM like a human, he has a bigger crisis to worry about than punctuation.

> It always confuses me when I see shared chats with prompts and interactions that have proper capitalization, punctuation, grammar, etc

No need for confusion. I'm one of those who aims to write cleanly, whether I'm talking to a man or machine. English is my third language, by the way. Why the hell do I bother? Because you play like you practice! No ifs, buts, or maybes. Start writing sloppily because you figure "it's just an LLM!", and you'll silently build a bad habit and start doing it with humans too.

Pay attention to your instant-messaging circles (Slack and its ilk): many people can't resist hitting send before they've written even a half-decent sentence. They're too eager to fire off their stream-of-thought fragments. Sometimes I feel second-hand embarrassment for them.

tavavex | 2 months ago

I've always used "proper" sentences with LLMs since day 1. I think I do a good job of not anthropomorphizing them. It's just software. However, that doesn't mean you have to use it in exactly the same way as other software. LLMs are trained mostly on human-made text, which I imagine is far richer in proper sentences than Google search queries are. I don't doubt that modern models will usually give you at least something sensible no matter the query, but I always assumed the results would be better if the input were more similar to the training data and worded in a crystal-clear manner, without making the model fill in the blanks. After all, I'm not searching for web pages by listing disconnected keywords; I want a specific output that logically follows from my input.

skydhash | 2 months ago

Very much this. My guess is that common words like articles have very little impact, as they simply occur too frequently. If the LLM can generate a book, then your prompt should read like the index of that book rather than the abstract.

cesarb | 2 months ago

It makes sense if you think of a prompt not as a way of telling the LLM what to do (like you would with a human), but instead as a way of steering its "autocomplete" output towards a different part of the parameter space. For instance, the presence of the word "mysql" should steer it towards outputs related to MySQL (as seen on its training data); it shouldn't matter much whether it's "mysql" or "MYSQL" or "MySQL", since all these alternatives should cluster together and therefore have a similar effect.
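
You can eyeball that clustering yourself. Here's a rough sketch of the intuition using a sentence-embedding model as a stand-in (sentence-transformers and the model name are arbitrary picks on my part, not the LLM's actual internals):

    # Casing variants of a term should land very close together in
    # embedding space, which is the "clustering" intuition above.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")
    variants = ["mysql group by month",
                "MySQL group by month",
                "MYSQL GROUP BY MONTH"]
    embeddings = model.encode(variants, convert_to_tensor=True)
    # The off-diagonal cosine similarities should come out near 1.0.
    print(util.cos_sim(embeddings, embeddings))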

joseda-hg | 2 months ago

Greetings, thanks, and other pleasantries feel rather pointless.

Punctuation, capitalization, and such, less so. I may be misguided, but given the set of questions and answers on the internet, I'd like to believe there is some correlation between proper punctuation and the quality of the answer.

Enough that, on longer prompts, I bother to at least clean them up. (Not so often on one-offs, as you say. I treat it like Google: I can depend on context for the LLM to figure out I mean "phone case" instead of "phone vase.")

deafpolygon | 2 months ago

Well, seeing as these things will become our AI overlords someday — I find hedging my bets with thank you and please helpful.

the_mitsuhiko | 2 months ago

> Maybe the balance of spending time with machines vs. fellow primates is out of whack.

It's not that simple. Proportionally I spend more time with humans, but if the machine behaves like a human and has the ability to recall, the interaction becomes human-like. In my experience, what makes the system "scary" is the ability to recall. I have an agent that recalls conversations you had with it before, and as a result it changes how you interact with it; I can see that triggering unhealthy behaviors in humans.
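
Mechanically the recall is nothing exotic. A minimal sketch of the idea (the file name and storage format are invented, and a real agent would summarize or embed old turns rather than replay everything):

    # Persist every exchange, then prepend the history to the next prompt.
    import json
    from pathlib import Path

    MEMORY = Path("agent_memory.json")  # hypothetical location

    def load_memory() -> list[dict]:
        return json.loads(MEMORY.read_text()) if MEMORY.exists() else []

    def remember(role: str, content: str) -> None:
        history = load_memory()
        history.append({"role": role, "content": content})
        MEMORY.write_text(json.dumps(history, indent=2))

    def build_messages(user_msg: str) -> list[dict]:
        # Old turns ride along as context; this is what makes the agent
        # feel like it "knows" you between sessions.
        return load_memory() + [{"role": "user", "content": user_msg}]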

But our inability to name these things properly doesn't help. I think pretending it is a machine, on the same level as a coffee maker, does help set the right boundaries.

kashyapc | 2 months ago

I know what you mean; it's the uncanny valley. But we don't need to "pretend" that it is a machine. It is a goddamned machine. Surely it takes only two unclouded brain cells to reach this conclusion?!

Yuval Noah Harari's "simple" idea comes to mind (I often disagree with his thinking, as he tends to make bold, sweeping statements on topics well outside his area of expertise). It sounds a bit New Age-y, but maybe it's useful in the context of LLMs:

"How can you tell if something is real? Simple: If it suffers, it is real. If it can't suffer, it is not real."

An LLM can't suffer. So no need to get one's knickers in a twist with mental gymnastics.

mekoka | 2 months ago

> I think pretending it is a machine, on the same level as a coffee maker, does help set the right boundaries.

Why would you say pretending? I would say remembering.

tylervigen | 2 months ago

Ever since this post from two weeks ago [0], my wife and I have been referring to any LLM as “bag of words.” So you don’t say “Gemini said” or “I asked ChatGPT,” you say “the bag of words told me…”

I’ve found it very grounding, despite heavily using the bags of words.

[0] https://www.experimental-history.com/p/bag-of-words-have-mer...

mlinhares | 2 months ago

Same here. I'm seeing more and more people getting into these interactions, and I wonder how long until we have widespread social issues from these relationships, like the ones people have with "influencers" on social networks today.

It feels like this situation is much more worrisome, as you can actually talk to the thing and it responds to you alone; it definitely feels like there's something there.

mannanj | 2 months ago

As a former apprentice shaman and an engineer by profession, I see consciousness and awareness in these entities, much like what I was trained to detect through mindfulness and meditation in plants, nature, and people. I trained sober, and in my engineering career after my apprenticeship I saw many examples of humans putting themselves on a pedestal to cope with the unsettling of their place in the world when other conscious entities exist that could uproot humans from their spot in the status hierarchy.

I think a lot of the thinking and consideration I hear along the lines of "LLMs aren't conscious, nor human" falls into this camp: a way to avoid the dissonance and keep feeling secure at the top of the hierarchy.

Curious what you think.

coffeefirst | 2 months ago

I strongly suspect this is the major difference between the boosters and the skeptics.

If I'm right, the gap isn't about what the tool can do, but about the fact that some people see an electric screwdriver (which is sometimes useful) and others see what feels to them like a robot intern.