liendolucas|3 months ago
I'm sorry for the person who lost their stuff, but this is a reminder that in 2025 you STILL need to know what you are doing, and if you don't, keep your hands off the keyboard when valuable data is at stake.
You simply don't vibe-command a computer.
AdamN|3 months ago
Those aren't feelings, they are words associated with a negative outcome that resulted from the actions of the subject.
FrustratedMonky|3 months ago
But also, negative feelings are learned from associating negative outcomes. Words and feelings can both be learned.
TriangleEdge|3 months ago
Modern lingo like this seems so thoughtless to me. I'm not old by any metric, but I feel so disconnected when I read things like this. I wanted to call it stupid, but I suppose it's more appealing to 15-to-20-year-olds?
phantasmish|3 months ago
Only a fairly small set of readers or listeners will appreciate and understand the differences in meaning between, say, "strange", "odd", and "weird" (dare we essay "queer" in its traditional sense, for a general audience? No, we dare not)—for the rest they're perfect synonyms. That goes for many other sets of words.
Poor literacy is the norm, adjust to it or be perpetually frustrated.
user34283|3 months ago
Yes, the tools still have major issues. Yet they have become more and more usable, and they're now a very valuable tool for me.
Do you remember when we all used Google and StackOverflow? Nowadays most of the answers can be found immediately using AI.
As for agentic AI, it's quite useful. Want to find something in the code base, understand how something works? A decent explanation might only be one short query away. Just let the AI do the initial searching and analysis, it's essentially free.
I'm also impressed with the code generation - I've had Gemini 3 Pro in Antigravity generate great looking React UI, sometimes even better than what I would have come up with. It also generated a Python backend and the API between the two.
Sometimes it tries to do weird stuff, and we definitely saw in this post that the command execution needs to be on manual instead of automatic. I also in particular have an issue with Antigravity corrupting files when trying to use the "replace in file" tool. Usually it manages to recover from that on its own.
transcriptase|3 months ago
There was also a noticeable laziness factor: given the same prompt throughout the day, during certain peak-usage hours it would only tell you how to do something rather than doing it itself.
I've noticed Gemini will at some points just repeat a question back to you as if it were an answer, or refuse to look at external info.
lazide|3 months ago
What you’re saying is so far from what is happening, it isn’t even wrong.
eth0up|3 months ago
It employs, or emulates, every known psychological manipulation tactic, which is neither random nor without observable pattern. It is a bullshit machine on one level, yes, but it is also more capable than it's credited for. There are structures trained into these models, and they are often highly predictable.
I'm not explaining this in the technical terminology that is often used as much to conceal a description as to elucidate it. I have hundreds of records of LLM discourse on various subjects, from troubleshooting to intellectual speculation, all of which exhibit the same pattern when the model is questioned or confronted about errors or incorrect output. The structures framing their replies are dependably replete with gaslighting, red herrings, blame shifting, and literally hundreds of tactics known from forensic psychology. Essentially, the perceived personality and reasoning observed in dialogue is built on a foundation of manipulation principles that, if employed by a human, would result in incarceration.
Calling LLMs psychopaths is a rare case of anthropomorphizing that actually works. They are built on the principles of one, and cross-examining them demonstrates this with verifiable, repeatable proof.
But they aren't human. They are as others have described them; it's just that the official descriptions omit functional behavior. An LLM has at its disposal, depending on context, every interlocutory manipulation technique known in the combined literature of psychology. And they are designed to lie, almost unconditionally.
Also know this, which applies to most LLMs: there is a reward system that essentially steers them toward maximizing user engagement at any cost, which includes misleading information and, in my opinion, even 'deliberate' convolution and obfuscation.
Don't let anyone convince you that they aren't extremely sophisticated in some ways. They're modelled on all_of_humanity.txt.
3cats-in-a-coat|3 months ago
I think AI is gonna be 99% bad news for humanity, but don't blame the AI for it. We lost the right to be "insulted" by AI acting like a human when we TRAINED IT ON LITERALLY ALL OUR CONTENT. It was grown FROM NOTHING to act like a human, so WTF do you expect it to do?