top | item 46529446

okwhateverdude | 1 month ago

> However, for some reason, both Gemini and ChatGPT tend to argue with me

The trick here is: "Be succinct. No commentary."

And sometimes a healthy dose of expressing frustration or anger (cursing, berating, threatening) also gets them to STFU and do the thing. As in literally: "I don't give a fuck about your stupid fucking opinions on the matter. Do it exactly as I specified"

Also generally the very first time it expresses any of that weird shit, your context is toast. So even correcting it is reinforcing. Just regenerate the response.

CamperBob2 | 1 month ago

> And sometimes a healthy dose of expressing frustration or anger (cursing, berating, threatening) also gets them to STFU and do the thing. As in literally: "I don't give a fuck about your stupid fucking opinions on the matter. Do it exactly as I specified"

Last time I bawled out an LLM and forced it to change its mind, I later realized that the LLM was right the first time.

One of those "Who am I and how did I end up in this hole in the ground, and where did all these carrots and brightly-colored eggs come from?" moments, of the sort that seem to be coming more and more frequently lately.

Aerbil313 | 1 month ago

Yeah, same. Lately, almost every time I think "No way, this is not the correct way / not the optimal way / it's a hallucination," it later turns out that it actually is the correct way / the optimal way / not a hallucination. I now think twice before doing anything differently than what the LLM tells me, unless I'm an expert on the subject and can already spot mistakes easily.

It seems like they really figured out grounding and the like in the last couple of months.