
spooky_deep | 2 months ago

They already are?

All popular models have a team working on fine-tuning them for sensitive topics. Whatever the company's legal/marketing/governance teams agree on is what gets tuned. Then millions of people use the output uncritically.


ericmcer | 2 months ago

Our information previously came through search engines. It seems far easier to filter search engine results than to fine-tune models.

fleischhauf | 2 months ago

The way people treat LLMs these days, they place far more trust in their output than in random internet sites.

Aachen | 2 months ago

> Then millions of people use the output uncritically.

Or critically, but it's still an input or viewpoint to consider.

Research shows that if you come across something often enough, you're going to be biased towards it, even if the message literally says that the information you just saw is false. I'm not sure which study that was exactly, but this seems to be at least related: https://en.wikipedia.org/wiki/Illusory_truth_effect