a1j9o94|2 months ago
I feel like more time is wasted trying to catch your coworkers using AI than just engaging with the plan. If it's a bad plan, say that and make sure your coworker is held accountable for presenting a bad plan. But it shouldn't matter if he gave 5 bullets to ChatGPT that expanded them into a full page with a detailed plan.
skwirl|2 months ago
The coworker should just give me the five bullet points they put into ChatGPT. I can trivially dump it into ChatGPT or any other LLM myself to turn it into a "plan."
mrisoli|2 months ago
I had a coworker schedule a meeting to discuss the technical design of an upcoming feature. I didn't have much time, so I only checked the research doc moments before the meeting: it was 26 pages long with over 70 references, of which 30+ were Reddit links. This wasn't a huge architectural decision, so I was dumbfounded; he seemed to have barely edited the document to his own preferences. The actual meeting was maybe the most awkward I've ever attended, as we were expected to weigh in on the options presented, but no one had opinions on the whole thing, not even the author. It was just too much of an AI document to even process.
poemxo|2 months ago
Asking for the prompt is also far more hostile than your coworker providing LLM-assisted word docs.
a1j9o94|2 months ago
In most of my work contexts, people want more formal documents with clean headings and titles, and detailed risks, even if it's the same risks we've put on every project.
amarant|2 months ago
It's all about the utility provided. That's the only thing that matters in the end.
Some people seem to think work is an exchange of suffering for money, and omg some colleagues are not suffering as much as they're supposed to!
The plan(or any other document) has to be judged on its own merits. Always. It doesn't matter how it was written. It really doesn't.
Does that mean AI usage can never be problematic? Of course not! If a colleague feeds their tasks to an LLM and never does anything to verify quality, and frequently submits poor-quality documents for colleagues to verify and correct, that's obviously bad. But think about it: a colleague who submits poor-quality work is problematic regardless of whether they wrote it themselves or had an AI do it.
A good document is a good document. And a bad one is a bad one. It doesn't matter if it was written using vim, Emacs, or Gemini 3.
meowface|2 months ago
If it's fiction writing or otherwise an attempt at somewhat artful prose, having an LLM write for you isn't cool (both due to stolen valor and the lame, trite style all current LLMs output), but for relatively low-stakes white collar job tasks I think it's often fine or even an upgrade. Definitely not always, and even when it's "fine" the slopstyle can be grating, but overall it's not that bad. As the LLMs get smarter it'll be less and less of an issue.
mystifyingpoi|2 months ago
That's the thing. It actually really matters whether the ideas presented come from a coworker or from an LLM.
I've seen way too many scenarios where I ask a coworker whether we should do X or Y, and all I get is a useless wall of spewed text, with complete disregard for the project and the circumstances at hand. I need YOUR input, from YOUR head, right now. If I could ask Copilot, I'd do that myself, thanks.
a1j9o94|2 months ago
If they answer your question with irrelevant context, then that's the problem, not that it was AI.