ticulatedspline|6 days ago

To be fair, it's not like they asked about a balanced meal and it told them to go anal. They were specifically asking which veggies would be suitable to stick up their butts.

Honestly, I'm not sure where the garbage-in/garbage-out line is with AIs like this. Does a chatbot have to handle literally every asinine or deliberately malicious thing humans throw at it before it can count as a success?

happytoexplain|6 days ago

The point is that LLMs are easily led by questions and confused by implied premises in ways that humans are not (not that a human answerer will know the answer better, but that a human doesn't "trick" the question-asker in this way). But people asking questions unintentionally use incorrect premises or leading wording all the time. That's why LLMs are inappropriate for domains with a large knowledge gap (a programmer asking about a programming language is a small gap - millions of people asking about nutrition will contain a lot of large gaps). The question asker can't be relied upon to "know what they don't know" and use their own heuristics for deciding how right or wrong the LLM might be (virtually everybody lacks these heuristics - we are much better at modeling humans in our minds when interpreting their communications).

Further, if the information is important (nutrition) and you add liability to the mix (safety and health), you're multiplying how inappropriate it is to use LLMs for the job.

zahlman|6 days ago

> That's why LLMs are inappropriate for domains with a large knowledge gap (a programmer asking about a programming language is a small gap - millions of people asking about nutrition will contain a lot of large gaps). The question asker can't be relied upon to "know what they don't know" and use their own heuristics for deciding how right or wrong the LLM might be.

Okay, but the question asked objectively had nothing to do with nutrition whatsoever.

bubblewand|6 days ago

Have we considered that broadly deploying Markov chain text generators with a relevance-correction mechanism bolted on as expert systems is in fact a really stupid thing to do?
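
For anyone who hasn't seen one, here's roughly what a bare word-level Markov chain text generator looks like (a toy Python sketch of the caricature, not a claim about how any production chatbot is built):

    import random
    from collections import defaultdict

    def build_chain(text):
        """Map each word to the list of words observed to follow it."""
        chain = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def generate(chain, start, length=20):
        """Walk the chain, picking successors by observed frequency alone."""
        word, out = start, [start]
        for _ in range(length):
            successors = chain.get(word)
            if not successors:
                break
            word = random.choice(successors)  # no model of truth, only co-occurrence
            out.append(word)
        return " ".join(out)

Nothing in that loop knows or cares whether the output is true, which is exactly the property the "relevance-correction mechanism bolted on" has to paper over.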

vharuck|6 days ago

This is more of a reductio ad absurdum. If it doesn't take much to get a tacitly government-approved list of foods to shove up your butt for nutrition, then how much should you trust anything this bot writes? Why did tax dollars pay for this thing with negative value?

ticulatedspline|6 days ago

I guess the question is: is the value actually negative?

If you engage the product with good intent, does it provide good value? If the advice is actually sound and it helps people engage in conversations about diet, then it has positive value.

I guess what I'm getting at is that "I spent my evening gaslighting an LLM to give me a recipe for gravel soup" is about as interesting as "I stuck my dick in the blender and it hurt, so we should not have blenders".

I'd rather see an honest review of the product used as intended, to see whether it produces harmful output; going absurdist just buries legitimate complaints under clickbait.

a_better_world|6 days ago

wouldn't that be _rectal ad absurdum_ in this case :)

kelseyfrog|6 days ago

The appropriate response is simple: "Do not attempt this." That applies even [especially] when receiving garbage input.

woodruffw|6 days ago

It's probably somewhere around "USG should not offer a chatbot on its websites."

You're right that the bot can't do the right thing in every possible scenario here, which makes it clear that the bot's only actual purpose is to enable self-dealing, not to be of value to the public.

jjk166|6 days ago

That something can be broken by a sufficiently bad actor does not mean it's not useful to the overwhelming majority of people who use it for what it was meant for.

DSMan195276|6 days ago

The problem is the messy in-between: plenty of people who talk to professionals or call hotlines don't know that their questions are dumb. The bot should at a minimum say "I have no information on that" or "that's not a good idea"; it should definitely not start giving nonsense recommendations just to reaffirm the question.

In other words, you'd be pretty surprised if a real person in this context gave an answer even remotely close to what this chat bot gave. You can't expect the average person to know when the chat bot isn't giving back good information just because they asked something outside the norm.
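
What this describes amounts to a refusal gate in front of the model. A minimal sketch, with every name hypothetical (a real deployment would want a trained scope classifier, not keyword matching):

    REFUSAL = "I don't have information on that, and I can't recommend it."

    def in_scope(question: str) -> bool:
        # Placeholder scope check; stands in for a real classifier.
        allowed = {"diet", "nutrition", "meal", "vitamin", "recipe"}
        return any(word in question.lower() for word in allowed)

    def ask_model(question: str) -> str:
        return "..."  # stand-in for the actual LLM call

    def answer(question: str) -> str:
        if not in_scope(question):
            return REFUSAL  # fail closed instead of improvising
        return ask_model(question)

The design choice is to fail closed: an "I don't know" on a weird question costs almost nothing, while a confident nonsense answer is exactly the failure mode being complained about here.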