ticulatedspline | 6 days ago
Honestly I'm not sure where the garbage-in/garbage-out line is with AIs like this. Can no chat-bot be a success unless it can handle literally every asinine or deliberately malicious thing humans throw at it?
happytoexplain | 6 days ago
Further, if the information is important (nutrition) and you add liability to the mix (safety and health), you're multiplying how inappropriate it is to use LLMs for the job.
zahlman | 6 days ago
Okay, but the question asked objectively had nothing to do with nutrition whatsoever.
ticulatedspline | 6 days ago
If you engage the product with good intent, does it provide good value? If the advice is actually sound and it helps people engage in conversations about diet, then it would have positive value.
I guess what I'm getting at is "I spent my evening gaslighting an LLM to give me a recipe for gravel soup" is about as interesting as "I stuck my dick in the blender and it hurt so we should not have blenders"
I'd rather see an honest review of use as intended to see if it produces harmful output; going absurdist just covers up legitimate complaints with clickbait.
woodruffw | 6 days ago
You're right that the bot can't possibly do the right thing in all possible scenarios here, which makes it clear that the bot's only actual purpose is to enable self-dealing, not to be of value to the public.
DSMan195276 | 6 days ago
In other words, you'd be pretty surprised if a real person in this context gave an answer even remotely close to what this chat bot gave. You can't expect the average person to know when the chat bot isn't giving back good information just because they asked something outside the norm.