I hear this argument often when people bring up the deficiencies of AI, and I don't find it convincing. Compare, for example, an AI coding assistant with reaching out to another engineer on my team. If I know this engineer, I will likely have an idea of their relative skill level, their familiarity with the problem at hand, their propensity to suggest one type of solution over another, and so on. People are pretty good at developing this kind of sense because we work with other people constantly. The AI assistant, on the other hand, is very much not like a human. I have a limited capacity to understand its "thought process," and I consider myself far more technical than the average person. This makes the verification step troublesome, because I don't know what to expect.

The difference is even starker when it comes to driving assistants. Video compilations of Teslas with FSD behaving erratically and, most importantly, unpredictably are all over the place. Experienced Tesla drivers seem to have some limited ability to anticipate the weaknesses of the FSD package, but the core issue remains that the driving assistant is so unlike a human. I've seen plenty of people say "well, humans cause car crashes too," but the key difference is that I have to sit behind the wheel and deal with the fact that my driving assistant may or may not suddenly swerve into oncoming traffic. The reasons for it doing so are likely obscure to me, and that is a real problem.