top | item 46207007

hotsauceror | 2 months ago

I agree with this sentiment.

When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care about nor respect you enough to bother confirming that the answer is correct."

ndsipa_pomu | 2 months ago

To my mind, it's like someone saying "I asked Fred down at the pub and he said...". It's someone stupidly repeating something that's likely stupid anyway.

giancarlostoro | 2 months ago

You can have the same problem with Googling things. LLMs usually form conclusions I align with when I do the independent research. Google isn't anywhere near as good as it was 5 years ago. All the years of crippling their search ranking system and suppressing results have caught up to them, to the point that most LLMs are Google replacements.

JeremyNT | 2 months ago

In a work context, for me at least, this class of reply can actually be pretty useful. It indicates somebody already minimally investigated a thing and may have at least some information about it, but they're hedging on certainty by letting me know "the robots say."

It's a huge asterisk to avoid stating something as a fact, but indicates something that could/should be explored further.

(This would be nonsense if they sent me an email or wrote up an issue this way, but in an ad-hoc conversation it makes sense to me.)

I think this is different from HN or other message boards, where it's not really used to hedge. If people don't actually personally believe something to be the case (or have a question to ask), why are they posting anyway? No value there.

dogleash | 2 months ago

> can actually be pretty useful. It indicates somebody already minimally investigated a thing

Every time this happens to me at work one of two things happens:

1) I know a bit about the topic, and they're proudly regurgitating an LLM about an aspect of the topic we didn't discuss last time. They think they're telling me something I don't know, while in reality they're exposing how haphazard their LLM use was.

2) I don't know about the topic, so I have to judge the usefulness of what they say based on all the times that person did scenario Number 1.

lanstin | 2 months ago

Yeah, if the person doing it is smart, I would trust that they used a reasonable prompt and ruled out flagrant BS answers. Sometimes the key thing is just learning the name of the thing in the answer. It's just as good/annoying as reporting what a Google search gives for the answer. I guess I assume most people will do the AI query/search and then decide whether to share the answer based on how good or useful it seems.

mikkupikku | 2 months ago

These days, most people who try googling for answers end up reading an article which was generated by AI anyway. At least if you go right to the bot, you know what you're getting.

MetaWhirledPeas | 2 months ago

> When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."

I have a less cynical take. These are casual replies, and being forthright about AI usage should be encouraged in such circumstances. It's a cue for you to take the answer with a grain of salt. By discouraging this you are encouraging the opposite: people masking their AI usage and pretending they are experts or did extensive research on their own.

If you wish to dismiss replies that admit AI usage you are free to do so. But you lose that freedom when people start to hide the origins of their information out of peer pressure or shame.

dogleash | 2 months ago

I am amused by the defeatism in your response: that expecting anyone to actually try anymore is a lost cause.

KaiserPro | 2 months ago

"Let's ask the dipshit" is how my colleague phrases it.

gardenhedge | 2 months ago

I disagree that it's merely a potential avenue for further investigation. IMO, AI should always be consulted.

OptionOfT | 2 months ago

But I'm not interested in the AI's point of view; I can get that myself.

I want to hear your thoughts, based on your unique experience, not the AI's, which is an average of the experience in the data it ingested. The things that are unique will not surface because they aren't seen enough times.

Your value is not in copy-pasting. It's in your experience.

JoshTriplett | 2 months ago

If I wanted to consult an AI, I'd consult an AI. "I consulted an AI and pasted in its answer" is worse than worthless. "I consulted an AI and carefully checked the result" might have value.