item 38946449

ryanklee | 2 years ago

What matters is whether the suggestions are actually good, not where they came from.

monooso|2 years ago

The author makes it clear in the associated discussion that he doesn't immediately assume the suggestions are bad based on their source:

> I take the time to understand and consider each suggestion, not rejecting anything out of hand, and share them with my team members (of which there are 17).

The issue is the time it takes to explain _why_ these are bad ideas to non-technical (and skeptical) colleagues.

ryanklee|2 years ago

That sounds to me less like an issue with ChatGPT and more like an issue with colleagues who don't know how to engage in reasonable discussion or evaluate information correctly.

x86x87|2 years ago

Brandolini's law

rideontime|2 years ago

The issue here is that the artists are confident that the suggestions (which they don't themselves understand) are good because they came from ChatGPT.

giancarlostoro|2 years ago

I would argue that if you cannot answer questions on something generated before suggesting it or handing it off, you absolutely should not be pushing for it. Do the research necessary to feel fully confident on a hand-off.

caller9|2 years ago

They aren't good. Also, diffusion models work well for the artists at spitting out pixels, so they assume the LLM-generated code is the same quality and that the OP is a fool who won't do what they ask due to lack of skill or stubbornness.

It's messing up the dynamic where creatives come up with blue-sky stuff and developers come back with a compromise on a possible solution. Now you have an AI model hallucinating plausible but fake solutions.

The model says what they want to hear because it is a chicken, not a pig in this scenario.

ren_engineer|2 years ago

the issue is that it takes effort to determine whether an idea is good or to find subtle errors in generated code, while generating it with GPT requires almost no effort from the person who then offloads it

notjoemama|2 years ago

My gut reaction is no, but let me think out loud using reductio ad absurdum to see if there is merit.

A five-year-old asks ChatGPT how to achieve world peace. The response is not only possible but easily affordable on a short timeline. Do I care that a five-year-old got it from ChatGPT? I guess not. A part of me would want all the adults on the planet to stop and acknowledge how tragically ineffectual they are, but as far as whether the source matters...nope.

Good point. Thanks for letting me think through it.

quickthrower2|2 years ago

Where they came from is part of how you decide if they are good. Especially for nuanced knowledge work.

If not, I have a pacemaker I would like to sell you.

renewiltord|2 years ago

Yeah, but it appears this guy can't convince the suggestion-givers, because they lack the expertise to evaluate their own suggestions. For instance, they could be asking ChatGPT how to do something in code and then just sending that along.