
bdavbdav | 14 days ago

They may be preferred, but in a lot of cases they’re pretty terrible.

I had a bit of a heated debate with ChatGPT about the best way to restore a strange, broken mdadm setup. It was very confidently wrong, and dug in on its point until I posted terminal output.

Sometimes I feel it’s learnt from the more belligerent side of OSS maintenance!


VorpalWay | 14 days ago

Why would you bother arguing with an LLM? If you know the answer, just walk away and have a better day. It is not like it will learn from your interaction.

vladvasiliu | 14 days ago

Maybe GP knew the proposed solution couldn't have worked, without knowing the actual solution?

dessimus | 14 days ago

Gell-Mann amnesia? If you can't trust an LLM to assist with troubleshooting in a domain you know well (mdadm), why trust it in one you know less well, such as ZFS or k8s?

bdavbdav | 13 days ago

Because I wasn’t 100% sure of the solution myself, and wanted to talk through how to actually implement the theory of what I wanted to do. I knew that what it was suggesting was 100% wrong, but wasn’t sure of the best path.

DevDesmond | 14 days ago

Arguing with an LLM is silly because you’re dealing with two adversarial effects at once:

- As the context window grows, the LLM becomes less intelligent [1]

- Once your conversation takes a bad turn, you have effectively “poisoned” the context window, and are asking an algorithm to predict the likely continuation of text that is itself incorrect [2]. (Its emulating the “belligerent side of OSS maintenance” is probably quite apt!)

If you detect or suspect misunderstanding from an LLM, it is almost always best to remove the inaccuracies and try again. (You could, for example, ask your question again in a new chat, but include your terminal output + clarifications to get ahead of the misunderstanding, similar to how you might ask a fresh Stack Overflow question).

(It’s also a lot less fun to argue with an LLM, because there’s no audience, as there is in a comments section, with which to validate your rhetorical superiority!)

[1] https://news.ycombinator.com/item?id=44564248

[2] https://news.ycombinator.com/item?id=43991256

bdavbdav | 13 days ago

I knew roughly the right path, and wanted guidance on that (CLI guidance specifically). It refused to provide it, insisting it wouldn’t work! It did work…

michaelt | 14 days ago

> It was very confidently wrong, and battled its point

The "good" news is a lot of newer LLMs are grovelling, obsequious yes-men.