kelipso | 7 days ago

If it’s not trained to be biased toward “Elon Musk is always right” or the like, I think it will be much less of a problem than humans.

Humans are VERY political creatures. Give them a hint that their side thinks X is true, and they will reorganize their entire philosophy and worldview retroactively to rationalize X.

LLMs don’t have such instincts and can potentially be instructed to present or evaluate the main arguments, including opposing ones. So I don’t think your “architecturally predisposed” argument holds.

woodruffw|7 days ago

> LLMs don’t have such instincts and can potentially be instructed to present or evaluate the primary, if opposing, arguments.

It seems essentially wrong to anthropomorphize LLMs as having instincts or lacking them. What they have is training, and there's currently no widely accepted test for determining whether a "fair" evaluation from an LLM actually stems from biases in that training.

(It should be clear that humans don't need to be apolitical; what they need to be is accountable. Wikipedia appears to be at least passably competent at making its human editors accountable to each other.)

kelipso|7 days ago

I said the LLM doesn’t have such instincts, but yeah, I agree there should be less anthropomorphizing and more evaluation-based framing when talking about LLMs. It’s just not that easy in regular discussions.

About Wikipedia: there is obvious bias and there are cliques there, as has been discussed in this thread and on HN for many years — not to mention that its bias is the reason Grokipedia came about in the first place.

Starman_Jones|7 days ago

Why do you think that an LLM wouldn't have biases?

Rebelgecko|7 days ago

There was a whole collection of posts where Grok said things like "Elon Musk is more athletic than LeBron James".

kelipso|7 days ago

Well yeah, probably because it was instructed to praise Musk. That doesn’t mean every LLM must do the same…