top | item 46750163

bsoles | 1 month ago

The reliability of any AI tool with potentially severe consequences for people needs to be tested using adversarial patterns. This is nothing new, yet the article in question fails to do so: it tests only the happy paths and finds the results satisfactory.
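To make the happy-path vs. adversarial distinction concrete, here is a minimal sketch. The keyword classifier, the test cases, and the perturbation are all hypothetical stand-ins for whatever black-box tool is being evaluated; the point is only that a perfect score on clean inputs says nothing about trivially obfuscated ones.

```python
# Hypothetical toy classifier: flags a message as "fraud" if it contains
# certain keywords. Stands in for any black-box AI tool under evaluation.
def classify(text):
    keywords = {"refund", "urgent", "wire"}
    return "fraud" if any(k in text.lower().split() for k in keywords) else "ok"

# Happy-path cases: the clean inputs a favorable evaluation would use.
happy = [("urgent wire refund", "fraud"), ("thanks for the invoice", "ok")]

# Adversarial cases: a trivial obfuscation a motivated user might try
# (splitting each longer word with a period so keyword matching misses it).
def perturb(text):
    return " ".join(w[:2] + "." + w[2:] if len(w) > 3 else w
                    for w in text.split())

adversarial = [(perturb(t), label) for t, label in happy]

def accuracy(cases):
    return sum(classify(t) == y for t, y in cases) / len(cases)

print("happy-path accuracy:", accuracy(happy))        # perfect on clean inputs
print("adversarial accuracy:", accuracy(adversarial)) # degrades under perturbation
```

Real evaluations would of course use stronger attacks, but the structure is the same: the adversarial suite, not the happy-path suite, is what bounds real-world reliability.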

It is very common for academic investigations to report accuracy above 95%, let alone 90%, while the same AI tools fail miserably in the real world.

So, yes, this is the nightmare scenario I am afraid of: a simplistic "investigation" being used to justify deploying unproven AI tools with real-life consequences for people.
