Kim_Bruning|15 days ago
All the separate pieces seem to be working in fairly mundane and intended ways, but out in the wild they come together in unexpected ways. That shouldn't be surprising when you have a million of these things out there. There are going to be more incidents, for sure.
Theoretically we could still try banning AI agents, but realistically I don't think we can put that genie back in the bottle.
Nor can we legislate strict 1:1 liability. The situation is already more complicated than that.
Like with cars, I think we're going to need to come up with lessons learned, best practices, then safety regulations, and ultimately probably laws.
At the rate this is going... likely by this summer.
Kim_Bruning|15 days ago
I'm updating my thinking. Where do we put the threshold for malice, and for negligence?
Because right now, a one-in-a-million chance of things going wrong (this month) leads to a prediction of 2-3 incidents already (anecdata across the HN discussions we've had suggests we're at that threshold already). And one-in-a-million odds of trouble isn't normally considered wildly irresponsible in itself.
tremon|15 days ago
For humans, who are capable of perhaps a few dozen significant actions per day, that may be true. But if that same one-in-a-million rate applies to a bot that can perform 10 million actions in a day, you're looking at ten injuries per day. So perhaps you should be looking at mean time between failures rather than only the positive/negative outcome ratio?
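To make the scaling concrete, here's a minimal back-of-the-envelope sketch in Python. The ~50 actions/day and 10 million actions/day figures come from the comment above, and the one-in-a-million per-action failure rate is the thread's assumption; treating failures as independent is a simplification of mine, for illustration only.

```python
# Back-of-the-envelope sketch: the same per-action failure rate yields
# wildly different failure frequencies depending on action volume.
# Assumes failures are independent -- a simplification for illustration.

P_FAIL = 1e-6  # assumed one-in-a-million chance any single action goes wrong

def expected_failures_per_day(actions_per_day: float) -> float:
    """Expected number of failures per day at a given action rate."""
    return actions_per_day * P_FAIL

def mtbf_days(actions_per_day: float) -> float:
    """Mean time between failures, in days, at a given action rate."""
    return 1.0 / expected_failures_per_day(actions_per_day)

for label, rate in [("human (~50 significant actions/day)", 50),
                    ("bot (10 million actions/day)", 10_000_000)]:
    print(f"{label}: {expected_failures_per_day(rate):.4g} failures/day, "
          f"MTBF ~ {mtbf_days(rate):.4g} days")

# Output:
# human (~50 significant actions/day): 5e-05 failures/day, MTBF ~ 2e+04 days
# bot (10 million actions/day): 10 failures/day, MTBF ~ 0.1 days
```

At the human rate, that works out to one failure roughly every 55 years, which looks negligible; at the bot rate it's one failure every ~2.4 hours, which is why mean time between failures arguably tells you more than the per-action odds do.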