fabian2k | 1 month ago
Still damning that the data is this bad even then. Good data wouldn't tell us anything, but the bad data likely means the AI is bad, unless they were spectacularly unlucky. And since Tesla redacts all the information, I'm not inclined to give them any benefit of the doubt here.
fransje26 | 1 month ago
Sorry, that does not compute.
It tells you exactly whether the AI is any good: despite the fact that there were safety drivers on board, 9 crashes still happened, which implies that even more crashes would have occurred without them. Over 500,000 miles, that's pretty bad.
Unless you are willing to argue, in bad faith, that the crashes happened because of safety driver intervention...
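For a rough sense of scale, here is a back-of-envelope comparison of those figures. The human baseline of roughly one police-reported crash per ~500,000 miles is an assumption for illustration, not a number from the thread.

    # Back-of-envelope crash-rate comparison (illustrative only).
    # The human baseline below is an assumed ballpark, not data from the thread.

    reported_crashes = 9          # crashes reported, with safety drivers on board
    miles_driven = 500_000        # miles covered over the same period

    ai_rate = reported_crashes / miles_driven   # crashes per mile
    human_baseline = 1 / 500_000                # assumed: ~1 crash per 500,000 miles

    print(f"One crash every {miles_driven / reported_crashes:,.0f} miles")
    print(f"Roughly {ai_rate / human_baseline:.0f}x the assumed human baseline")

Under those assumptions that works out to about one crash every ~56,000 miles, or roughly nine times the assumed baseline, and that is with safety drivers intervening.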
fabian2k | 1 month ago
But if the number of crashes had been lower than for human drivers, that would have told us nothing at all, since safety-driver interventions could just as well have been masking AI failures.
repelsteeltje | 1 month ago
I think we're on to something. You imply that "good" here means the AI can do its thing without human interference. But that's not how we view, say, LLMs being good at coding.
In the first context we hope for AI to improve safety, whereas in the second we merely hope for it to improve productivity.
In both cases, a human is in the loop, which results in second-order complexity: the human adjusts their behaviour to the AI's reality, which in turn redefines what "good AI" means, in an endless loop.