PeterHolzwarth|1 month ago
This seems like a web problem, not a ChatGPT issue specifically.
I feel that some may respond that ChatGPT/LLMs available for chat on the web are specifically worse because they express things with an air of authority while being highly inaccurate. But again, I feel this characterizes the web in general, not ChatGPT/LLMs uniquely.
Is there an angle here I am not picking up on, do you think?
toofy|1 month ago
these companies are simultaneously telling us it’s the greatest thing ever and also never trust it. which is it?
give us all of the money, but also never trust our product.
our product will replace humans in your company, also, our product is dumb af.
subscribe to us because our product has all the answers, fast. also, never trust those answers.
ninjin|1 month ago
If you go digging on darkweb forums and you see user Hufflepuffed47___ talking about dosages on a website in black and neon green, it is very different from paying a monthly subscription to a company valued in the billions that serves you the same information through the same sleek channel that "helps" you with your homework and tells you about the weather. OpenAI et al. are completely uprooting the way we determine source credibility and establish trust on the web and they elected to be these "information portals".
With web search, it is very clear when we cross the boundary from the search engine to another source (or it used to be before Google and others muddied it with pre-canned answers), but in this case it is entirely erased and over time you come to trust the entity you are chatting with.
Cases like these were bound to happen, and while I do not fault the technology itself, I certainly fault those who sell and profit from providing these "intelligent" entities to the general public.
Animats|1 month ago
The presentation style of most LLMs is confident and authoritative, even when totally wrong. That's the problem.
Systems that ingest social media and then return it as authoritative information are doomed to do things like this. We're seeing this in other contexts: systems that believe all of their prompt history equally, leading to security holes.
falkensmaize|1 month ago
So when ChatGPT gives you a confident, highly personalized answer to your question and speaks directly to you as a medical professional would, that is going to carry far more weight and authority with uninformed people than a Reddit comment or a blog post.
WalterBright|1 month ago
My trust in what the experts say has declined drastically over the last 10 years.
xyzzy123|1 month ago
I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs".