Ever since Google started experimenting with LLMs in Gmail, it has bothered me a lot. I firmly believe that every word, and the way you put them together, portrays who you are. Using an LLM for direct communication is harmful to human connection.
It can be. It can also not be. A friend of mine had a PITA boss. Thanks to ChatGPT, he salvaged the relationship even though he hated working with him.
He went on to something else but his stress levels went way down.
All this is to say: I agree with you if the human connection is in good faith. If it isn’t then LLMs are helpful sometimes.
It sounds like that relationship was not meant to be salvaged to begin with. ChatGPT perhaps just prolonged your friend's suffering; he ended up moving on anyway, possibly after an unnecessary delay.
IMHO, the real problem is that they create an even greater dissonance between online life and IRL.
Think about dating apps: the pictures could be fake, and now the words exchanged can be fake too.
You thought you were arguing with a gentle, smart colleague over chat and email; too bad, when you meet them at a conference or at a restaurant you find them very unpleasant.
This comment has made me glad for the LLM in Gmail. If someone is going to over-analyze my every word because he firmly believes it portrays who I am, I'd appreciate the layer of obfuscation between me and this creepazoid.