top | item 46731468

monegator | 1 month ago

> The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have.

ever had a client second-guess you by replying with a screenshot from GPT?

ever asked anything in a public group only to have a complete moron reply with a screenshot from GPT or - with at least a bit of effort there - a copy/paste of the wall of text?

no, people have no shame. they have a need for a little bit of (borrowed) self importance and validation.

Which is why I applaud every code of conduct that has public ridicule as the punishment for wasting everybody's time.

Sharlin|1 month ago

The problem is that people seriously believe that whatever GPT tells them must be true, because… I don't even know. Just because it sounds self-confident and authoritative? Because computers are supposed to not make mistakes? Because talking computers in science fiction don't make mistakes like that? The fact that LLMs ended up having this particular failure mode, out of all possible failure modes, is incredibly unfortunate and detrimental to society.

pera|1 month ago

Last year I had to deal with a contractor who sincerely believed that a very popular library had a bug because it was erroring when parsing ChatGPT-generated JSON... I'm still shocked; this is seriously scary.

Suzuran|1 month ago

My boss says it's because they are backed by trillion dollar companies and the companies would face dire legal threats if they did not ensure the correctness of AI output.

tveita|1 month ago

I think people's attitude would be better calibrated to reality if LLM providers were legally required to call their service "a random drunk guy on the subway"

E.g.

"A random drunk guy on the subway suggested that this wouldn't be a problem if we were running the latest SOL server version" "Huh, I guess that's worth testing"

anon_anon12|1 month ago

People's trust in LLMs imo stems from a lack of awareness of AI hallucination. Hallucination benchmarks are often hidden or mentioned hastily in marketing videos.

Cthulhu_|1 month ago

I don't remember exactly who said it, but at one point I read a good take: people trust these chatbots because there are big companies and billions of dollars behind them, and surely big companies test and verify their stuff thoroughly?

But (as someone else described), GPTs and other current-day LLMs are probabilistic, and 99% of what they produce seems plausible enough.

pjc50|1 month ago

Billions of dollars of marketing have been spent to enable them to believe that, in order to justify the trillions of investment. Why would you invest a trillion dollars in a machine that occasionally randomly gave wrong answers?

pousada|1 month ago

I think in science fiction it’s one of the most common themes for the talking computer to be utterly horribly wrong, often resulting in complete annihilation of all life on earth.

Unless I have been reading very different science fiction I think it’s definitely not that.

I think it’s more the confidence and seeming plausibility of LLM answers

johnnyanmac|1 month ago

This is probably more of an AGI achievement, but we definitely need confidence levels when it comes to queries expecting factual responses.

But yes, look at the US c.2025-6. As long as the leader sounds assertive, some people will eat up blatant lies that can be disproven even by the same AI tools they laud.

TeMPOraL|1 month ago

This sounds a bit like the "Asking vs. Guessing culture" discussion on the front page yesterday, with the "Guesser" being the GP, who's front-loading extra investigation, debugging, and maintenance work so the project maintainers don't have to do it, and the "Asker" being the client from your example, pasting the submission to ChatGPT and forwarding its response.

slfreference|1 month ago

>> In Guess Culture, you avoid putting a request into words unless you're pretty sure the answer will be yes. Guess Culture depends on a tight net of shared expectations. A key skill is putting out delicate feelers. If you do this with enough subtlety, you won't even have to make the request directly; you'll get an offer. Even then, the offer may be genuine or pro forma; it takes yet more skill and delicacy to discern whether you should accept.

delicate feelers are like octopus arms

ncruces|1 month ago

I've also had the opposite.

I raise an issue or PR after carefully reviewing someone else's open source code.

They asked Claude to answer me; neither they nor Claude understood the issue.

Well, at least it's their repo, they can do whatever.

monooso|1 month ago

Not OP, but I don't consider these the same thing.

The client in your example isn't a (presumably) professional developer, submitting code to a public repository, inviting the scrutiny of fellow professionals and potential future clients or employers.

monegator|1 month ago

I consider them to be the same attitude. Machine made it / Machine said it. It must be right, you must be wrong.

They are sure they know better because they get a yes man doing their job for them.

meindnoch|1 month ago

Our CEO chiming in on a technical discussion between engineers: by the way, this is what Claude says: *some completely made-up bullshit*

pixl97|1 month ago

I do want to counter that in the past before AI, the CEO would just chime in with some completely off the wall bullshit from a consultant.

javcasas|1 month ago

Hi CEO, thanks for the input. Next time we have a discussion, we will ask Claude instead of discussing it with whoever wrote the offending code.

positive-spite|1 month ago

It hasn't happened to me yet.

I'm not looking forward to it...

Aeolun|1 month ago

Random people don’t do this. Your boss however…