item 28877871


stuartbman | 4 years ago

One of the problems with closed models is that a model can end up training on the 'wrong' signal. E.g. a chest X-ray reader learns that images taken with the portable machine in the ICU indicate sicker patients than images taken elsewhere; that correlation isn't clinically useful. If you can't inspect the model to check for that, the vendor might claim superior performance, but the model won't work as well as advertised once it's deployed. Other biases could creep in as well: imagine a 'Greyball for healthcare' with the wrong incentives, which recommends a certain drug/therapy more often than it should.
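The failure mode above (a classifier riding a confound that breaks at deployment) can be sketched with synthetic data. This is a hypothetical illustration, not any real X-ray model: `portable` stands in for the scanner-type feature, and the numbers are made up so the confound dominates a weak genuine signal.

```python
# Hypothetical sketch of shortcut learning: a classifier latches onto a
# scanner-type confound ("portable" machine in ICU) instead of pathology.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

def make_data(confounded):
    sick = rng.integers(0, 2, n)            # ground-truth label
    signal = sick + rng.normal(0, 2.0, n)   # weak genuine clinical signal
    if confounded:
        # At the training hospital, portable/ICU scans almost always mean sick
        portable = np.where(rng.random(n) < 0.95, sick, 1 - sick)
    else:
        # At a new site, scanner choice is unrelated to illness
        portable = rng.integers(0, 2, n)
    X = np.column_stack([signal, portable])
    return X, sick

X_train, y_train = make_data(confounded=True)
X_test, y_test = make_data(confounded=False)

clf = LogisticRegression().fit(X_train, y_train)
train_acc = clf.score(X_train, y_train)  # looks great: rides the confound
test_acc = clf.score(X_test, y_test)     # drops once the confound breaks
print(f"in-distribution acc: {train_acc:.2f}, new-site acc: {test_acc:.2f}")
```

The headline number a vendor would report is the in-distribution accuracy; the new-site accuracy is what the point above predicts, and without access to the model (or its inputs) there is no way to tell which regime you're in.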


ashtonkem | 4 years ago

One of my more radical opinions in this area is that it should be illegal to sell a closed, proprietary ML model in areas of public safety, specifically in hospitals and in courts/jails. The public's interest in transparency in such matters trumps the company's intellectual property. Trained experts get a chance to inspect every drug and every medical device that's used; why shouldn't they get to see how an ML model used in a hospital was trained?

stuartbman | 4 years ago

Completely agree. I've not seen it tested legally, but the EU now has a 'right to explanation' (under the GDPR) for automated decisions made about people. Taken seriously, this would prohibit closed ML from most of these arenas.