I don't think he's doing that at all. The article is pointing out to non-technical people how AI is different from traditional software. I'm not sure how you think it's giving AI a break, since it's pointing out that AI is essentially impossible to reason about. And it's not at the expense of regular developers, because it's showing how regular software development differs from this. It makes two buckets and puts AI in one and non-AI in the other.
alganet|4 months ago
The fact is, we kind of know how to prevent problems in AI systems:
- Good benchmarks. People pointed out several times that LLMs display erratic behavior that could have been prevented. Instead of adjusting the benchmarks (which would have slowed down development), they ignored the issues.
- Accountability frameworks. Who is responsible when an AI fails? How is the company responsible for the model going to make up for it? That was a demand from the very beginning, and there are still no such accountability systems in place. It's a clown fiesta.
- Slowing down. If you have a buggy product, you don't scale it. First, you try to understand the problem. What happened was the opposite, and at the time they lied and said scaling would solve the issues (when in fact many people knew that scaling wouldn't solve shit).
Yes, it's kind of different. But it's a kind of different we already know. Stop pushing this idea that this stuff is completely new.
SalientBlue|4 months ago
'we' is the operative word here. 'We', meaning technical people who have followed this stuff for years. The target audience of this article is not part of this 'we', and this stuff IS completely new _for them_. The target audience is people who, when confronted with a problem with an LLM, think it is perfectly reasonable to just tell someone to 'look at the code' and 'fix the bug'. You are not the target audience, and you are arguing something entirely different.