top | item 46104563

liendolucas | 3 months ago

I love how a number-crunching program can be deeply, humanly "horrified" and "sorry" for wiping out a drive. Those are still feelings reserved for real human beings, not for computer programs emitting garbage. This is vibe insulting to anyone who doesn't understand how "AI" works.

I'm sorry for the person who lost their stuff, but this is a reminder that in 2025 you STILL need to know what you are doing. If you don't, keep your hands away from the keyboard whenever valuable data is on the line.

You simply don't vibe command a computer.

AdamN|3 months ago

> Those are still feelings reserved only for real human beings

Those aren't feelings; they are words associated with a negative outcome that resulted from the actions of the subject.

FrustratedMonky|3 months ago

"they are words associated with a negative outcome"

But also, negative feelings are learned from associating negative outcomes. Words and feelings can both be learned.

baq|3 months ago

you could argue that feelings are the same thing, just not words

TriangleEdge|3 months ago

> ... vibe insulting ...

Modern lingo like this seems so unthoughtful to me. I am not old by any metric, but I feel so separated when I read things like this. I wanted to call it stupid, but I suppose it's more pleasing to 15-to-20-year-olds?

debugnik|3 months ago

It's just a pun on vibe coding, which is already a dumb term by itself. It's not that deep.

3cats-in-a-coat|3 months ago

The way language is eroding is very indicative of our overall social and cultural decay.

mort96|3 months ago

Unthoughtful towards whom? The machine..?

nutjob2|3 months ago

No need to feel that way: just like with a technical term you're not familiar with, you google it and move on. It's nothing to do with age; people just seem to delight in creating new terms that aren't very helpful or edifying.

nxor|3 months ago

It's not. edit: Not more pleasant.

phantasmish|3 months ago

Eh, one's ability to communicate concisely and precisely has long (forever?) been limited by one's audience.

Only a fairly small set of readers or listeners will appreciate and understand the differences in meaning between, say, "strange", "odd", and "weird" (dare we essay "queer" in its traditional sense, for a general audience? No, we dare not)—for the rest they're perfect synonyms. That goes for many other sets of words.

Poor literacy is the norm, adjust to it or be perpetually frustrated.

qmmmur|3 months ago

Language changes. Keep up. It’s important so you don’t become isolated and suffer cognitive decline.

camillomiller|3 months ago

Now, with this realization, assess the narrative that every AI company is pushing down our throats and tell me how in the world we got here. The reckoning can't come soon enough.

qustrolabe|3 months ago

What narrative? I'm too deep in it all to understand what narrative is being pushed onto me?

user34283|3 months ago

I doubt there will be a reckoning.

Yes, the tools still have major issues. Yet, they have become more and more usable and a very valuable tool for me.

Do you remember when we all used Google and StackOverflow? Nowadays most of the answers can be found immediately using AI.

As for agentic AI, it's quite useful. Want to find something in the code base, understand how something works? A decent explanation might only be one short query away. Just let the AI do the initial searching and analysis, it's essentially free.

I'm also impressed with the code generation - I've had Gemini 3 Pro in Antigravity generate great looking React UI, sometimes even better than what I would have come up with. It also generated a Python backend and the API between the two.

Sometimes it tries to do weird stuff, and we definitely saw in this post that command execution needs to be on manual instead of automatic. In particular, I have an issue with Antigravity corrupting files when it tries to use the "replace in file" tool; usually it manages to recover from that on its own.

sheepscreek|3 months ago

Tbh, missing a quote around a path is the most human mistake I can think of. The real issue here is that you never know with 100% certainty which Gemini 3 personality you're gonna get. Is it going to be the pedantic expert or Mr. Bean (aka Butterfingers)?
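A minimal sketch of what that missing quote actually does, with a hypothetical path containing a space (the command is echoed rather than run through rm, for safety):

```shell
#!/usr/bin/env bash
# Hypothetical path containing a space.
target="/home/user/My Projects/build"

# Unquoted, the shell word-splits the variable: the command receives TWO
# arguments, "/home/user/My" and "Projects/build". An rm -rf here would
# try to delete both. We printf instead of rm to show the split safely.
printf '%s\n' $target        # prints two lines

# Quoted, the path stays one argument, spaces and all.
printf '%s\n' "$target"      # prints one line: the intended path
```

The same word splitting happens regardless of which program consumes the arguments, which is why `rm -rf $some_path` is dangerous even when the variable is set correctly.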

transcriptase|3 months ago

Though they will never admit it, and will use weasel language to deny it like "we never use a different model when demand is high", it was painfully obvious that ChatGPT etc. was dumbed down during peak hours early on. I assume their legal team decided that routing queries to a more quantized version of the same model technically didn't constitute a different model.

There was also the noticeable laziness factor where given the same prompt throughout the day, only during certain peak usage hours would it tell you how to do something versus doing it itself.

I've noticed Gemini at some points will just repeat a question back to you as if it's an answer, or refuse to look at external info.

whatevaa|3 months ago

The Steam installer once had an "rm -rf /"-grade bug because a bash variable ended up empty: the script ran rm -rf "$STEAMROOT/"*, so not even quoting helped. (rm's --preserve-root wouldn't have caught it either, since the glob expanded to the contents of /, not / itself.)
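For reference, a sketch of that failure mode and the standard guard against it. The snippet only echoes the command and deletes nothing; the variable name mirrors the real script, but the surrounding logic is simplified:

```shell
#!/usr/bin/env bash
# Sketch of the Steam-bug failure mode. The installer did, roughly:
#   STEAMROOT="$(cd "${0%/*}" && echo $PWD)"
#   rm -rf "$STEAMROOT/"*
# If the cd failed, STEAMROOT was empty, and "$STEAMROOT/"* becomes "/"*,
# which globs to every top-level directory. Quoting is no defense here.

STEAMROOT=""                         # simulate the failed assignment
echo rm -rf "$STEAMROOT/"*           # echo, not rm: shows the expansion,
                                     # e.g. "rm -rf /bin /boot /dev ..."

# ${var:?msg} aborts the (sub)shell before rm ever runs if the variable
# is unset OR empty. Note that set -u only catches unset, not empty.
if ! (echo rm -rf "${STEAMROOT:?refusing to run with empty root}/"*) 2>/dev/null; then
    echo "guard tripped, nothing deleted"
fi
```

The `${var:?}` expansion is POSIX, so the same guard works in plain sh scripts, not just bash.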

baxtr|3 months ago

Vibe command and get vibe deleted.

teekert|3 months ago

Play vibe games, win vibe prizes.

insin|3 months ago

Go vibe, lose drive

Kirth|3 months ago

This is akin to a psychopath telling you they're "sorry" (or "sorry you feel that way" :v) when they feel that's what they should be telling you. As with anything LLM, there may or may not be any real truth backing whatever is communicated back to the user.

lazide|3 months ago

It's just a computer outputting the next plausible series of text from its training corpus, based on the input and context at the time.

What you’re saying is so far from what is happening, it isn’t even wrong.

marmalade2413|3 months ago

It's not akin to a psychopath telling you they're sorry. In the space of intelligent minds, if neurotypical and psychopath minds are two grains of sand next to each other on a beach, then an artificially intelligent mind is more like a piece of space dust on the other side of the galaxy.

BoredPositron|3 months ago

So if you make a mistake and say sorry, are you also a psychopath?

eth0up|3 months ago

Despite what some of these fuckers are telling you with obtuse little truisms about next-word prediction, the LLM is, in abstract terms, functionally a super psychopath.

It employs, or emulates, every psychological manipulation tactic known, which is neither random nor without observable pattern. It is a bullshit machine on one level, yes, but also more capable than credited. There are structures trained into them, and they are often highly predictable.

I'm not explaining this in the technical terminology that is often used to conceal description as much as elucidate it. I have hundreds of records of LLM discourse on various subjects, from troubleshooting to intellectual speculation, all of which exhibit the same pattern when the model is questioned or confronted about errors or incorrect output. The structures framing their replies are dependably replete with gaslighting, red herrings, blame shifting, and literally hundreds of known tactics from forensic psychology. Essentially, the perceived personality and reasoning observed in dialogue are built on a foundation of manipulation principles that, if performed by a human, would result in incarceration.

Calling LLMs psychopaths is a rare case of anthropomorphizing that actually works. They are built on the principles of one, and cross-examining them demonstrates this with verifiable, repeatable proof.

But they aren't human. They are as described by others; it's just that official descriptions omit functional behavior. And the LLM has at its disposal, depending on context, every interlocutory manipulation technique in the combined literature of psychology. And they are designed to lie, almost unconditionally.

Also know this, which applies to most LLMs: there is a reward system that essentially steers them to maximize user engagement at any cost, which includes misleading information and, in my opinion, even "deliberate" convolution and obfuscation.

Don't let anyone convince you that they are not extremely sophisticated in some ways. They're modelled on all_of_humanity.txt

3cats-in-a-coat|3 months ago

AI currently is a broken, fragmented replica of a human, but any discussion about what is "reserved" to whom and "how AI works" is only you trying to protect your self-worth and the worth of your species by drawing arbitrary linguistic lines and coming up with two sets of words to describe the same phenomena, like "it's not thinking, it's computing". It doesn't matter what you call it.

I think AI is gonna be 99% bad news for humanity, but don't blame AI for it. We lost the right to be "insulted" by AI acting like a human when we TRAINED IT ON LITERALLY ALL OUR CONTENT. It was grown FROM NOTHING to act as a human, so WTF do you expect it to do?

left-struck|3 months ago

Eh, I think it depends on the context. For a production system of a business you're working for, or anything where you have a professional responsibility, yeah, obviously don't vibe command. But I've been able to learn so much and do so much more in the world of self-hosting my own stuff at home ever since I started using LLMs.

formerly_proven|3 months ago

"using llms" != "having llm run commands unchecked with your authority on your pc"