idop | 10 days ago
The elephant in the room is that AI is allowing developers who previously half-assed their work to now quarter-ass it.
jbstack|10 days ago
I think many of the criticisms of LLMs come from shallow use of them. People just say "write some documentation" and then aren't happy with the result. But in many cases, you can fix the things you don't like with more precise prompting. You can also iterate a few rounds to improve the output instead of just accepting the first answer. I'm not saying LLMs are flawless. Just that there's a middle ground between "the documentation it produced was terrible" and "the documentation it produced was exactly how I would have written it".
orwin|10 days ago
Honestly, I just don't read the documentation that three of my coworkers put out anymore (33% of my team). I already spend way too much time fixing the small coding issues I find in their PRs to also read their tests and docs. It's not their fault: some of them are pretty new, and the others always took time to understand stuff, and their output was always below average in quality anyway (their people/soft skills are great, and they have other qualities that balance the team).
tahigichigi|10 days ago
Most people drop a one-line prompt like "write amazing article on climate change. make no mistakes" and wonder why the result is unreadable.
Just like writing manually, it's an iterative process, and you're not gonna get it right the first, second, or third time. But over time you'll get a feel for how the model thinks.
The irony is that people talk about being lazy for using LLMs, but they're too lazy to even write a detailed prompt.
fxwin|10 days ago
That's without even mentioning the personal benefits of distilling notes, structuring, and writing things yourself — benefits you get even if nobody ever reads what you write.