(no title)
alpha_squared | 1 month ago
I'm working directly with these tools, and so are several of my colleagues. Our collective anecdotal experience keeps coming back to the same conclusion: the tech just isn't where the marketing says it is. There's probably some value in it, which leads others like yourself to be so completely sold, but that value isn't materializing much in my day-to-day beyond the most basic code and scaffolding, which I then have to go back and fix because of subtle errors. It's genuinely hard to tell whether my productivity is better, given the time I spend correcting the generated output.
Maybe it would help to recognize that your experience is not the norm. And if the tech were there, where are the actual profits from selling it? Increasingly, it's either "under development" for consumer sale or deployed only as a chatbot in scenarios where being wrong is acceptable and users are warned to verify the output themselves.
whimsicalism | 1 month ago
If my other replies come off as aggro, I apologize - I definitely can struggle with moderating tone in comments to reflect how I actually feel.
> Our collective anecdotal experience keeps coming back to the conclusion that the tech just isn't where the marketing is on its capabilities. There's probably some value in the tech here, which leads others like yourself to be so completely sold on it
Let me be clear - I am not completely sold on the current iteration. But I think there has been significant improvement even since the midpoint of last year: the number of diffs I return mostly unedited is sharply increasing, and many people I talk to privately tell me they no longer author any code themselves beyond minor edits in diffs. Given that it has only been three years since ChatGPT, really I am just looking at the curve and saying 'woah.'
kalkin | 1 month ago
It's unfortunately the case that even understanding what AI can and cannot do has become a matter of, as you say, "ideological world view". Ideally we'd be able to discuss what's factually true of AI at the beginning of 2026, and what's likely to be true within the next few years, independently of whether the trends are good for most humans or what we ought to do about them. In practice that's become pretty difficult, and the article to which we're all responding does not contribute positively.
johnnyanmac|1 month ago
Frequency is important too.
>independently of whether the trends are good for most humans or what we ought to do about them.
This whole article is about the trends and whether they are good for humans. I was pleasantly surprised that this was not yet another argument over whether "AI is (not) good enough," since people at this point have staked out their positions on that. I don't think it's too late to talk about how we as humans can manage Pandora's box before it opens.
Responses like this, which dismiss the human element, seem to want to isolate the technical discussion from its societal effects. The box will affect you too.