item 46723842


Draiken | 1 month ago

> Is it not reproducable? Someone up thread reproduced it and expanded on it. It worked for me the first time I prompted. Did you try it or are you just guessing that it's not reproducable because that's what you already think?

I assumed you knew how LLMs work. They are random by nature; I'm not guessing. There's a reason why, if you give an LLM the exact same prompt hundreds of times, you'll get hundreds of different answers.
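To make the point concrete, here's a minimal sketch of why this happens (a toy distribution, not a real model): at temperature > 0, the model *samples* the next token from a probability distribution instead of always picking the most likely one, so repeated runs with the same prompt can diverge.

```python
import random
from collections import Counter

def sample_next_token(probs, temperature=1.0):
    """Sample one token from a toy next-token distribution.

    Rescales probabilities by temperature, then draws randomly.
    With temperature > 0, repeated calls can return different tokens.
    """
    scaled = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(scaled.values())
    r = random.random() * total
    cumulative = 0.0
    for tok, weight in scaled.items():
        cumulative += weight
        if cumulative >= r:
            return tok
    return tok  # fallback for floating-point edge cases

# Hypothetical next-token distribution, standing in for a model's output.
probs = {"yes": 0.6, "no": 0.3, "maybe": 0.1}
counts = Counter(sample_next_token(probs, temperature=1.0) for _ in range(1000))
```

Run this and `counts` will contain more than one distinct answer for the identical "prompt". Greedy decoding (effectively temperature 0) would always return `"yes"`, but most LLM products don't run greedy by default.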

>I looked for, found, and shared evidence

Anecdotal evidence. Studies have shown exactly how unreliable LLMs are, precisely because they are not deterministic. Again, that's a fact, not an opinion.

>I'm talking about filtering spammy communication channels

So if we make tons of mistakes there, who cares, right?

I only used this as an example because it's one of the few very public uses of LLMs to make judgement calls where people accepted the output as true and faced consequences.

I'm sure there are plenty more people getting screwed over by similar mistakes, but folks generally aren't foolish enough to say so publicly. Maybe Salesforce's huge mistake qualifies too? Incidentally, it also involved people's jobs.

Regardless, the point stands: they are unreliable.

Want to trust LLMs blindly for your weekend project? Great! The only potential victim of its mistakes is you. For anything serious, like a huge open-source project? That's irresponsible.
