One of the problems with closed models is that any model can end up training on the 'wrong' data point. For example, a chest x-ray reader might learn that images taken with the portable machine in the ICU indicate sicker patients than images taken elsewhere: that's a confounder, not a useful clinical signal. If you can't inspect the model to check for that, the vendor might claim superior performance, yet the model won't work as well as advertised once it's deployed. Other biases can creep in too; for instance, you can imagine a 'Greyball for healthcare' which, given the wrong incentives, recommends a certain drug or therapy more often than it should.
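To make the confounder concrete, here's a toy sketch (every name and number is invented, and a hand-rolled logistic regression stands in for a real x-ray model): a 'severity' feature is the genuine signal, while an 'icu_machine' flag only correlates with sickness through hospital workflow. The model happily puts weight on the flag, then degrades when scored at a site where the correlation doesn't hold.

```python
import math
import random

random.seed(0)

def make_data(n, p_icu_given_sick):
    # Synthetic cohort: 'severity' is the real signal; the 'icu_machine'
    # flag only correlates with sickness via workflow (the confounder).
    rows = []
    for _ in range(n):
        sick = random.random() < 0.5
        severity = random.gauss(1.0 if sick else -1.0, 1.0)
        p_icu = p_icu_given_sick if sick else 1 - p_icu_given_sick
        icu = 1.0 if random.random() < p_icu else 0.0
        rows.append(((severity, icu), 1.0 if sick else 0.0))
    return rows

def train_logreg(data, epochs=200, lr=0.1):
    # Plain SGD logistic regression on two features.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x, y) in data:
            p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            g = p - y
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

def accuracy(w, b, data):
    hits = sum((w[0] * x[0] + w[1] * x[1] + b > 0) == (y == 1.0)
               for (x, y) in data)
    return hits / len(data)

# Training site: ICU machine strongly confounded with sickness.
train = make_data(2000, p_icu_given_sick=0.9)
w, b = train_logreg(train)

same_dist = make_data(2000, p_icu_given_sick=0.9)
shifted = make_data(2000, p_icu_given_sick=0.5)  # new site: no confounding

print(f"weight on icu_machine flag: {w[1]:.2f}")
print(f"accuracy, same distribution:  {accuracy(w, b, same_dist):.2f}")
print(f"accuracy, confounder removed: {accuracy(w, b, shifted):.2f}")
```

With an open model you'd see the large weight on the machine flag and know to question it; with a closed one, all you'd see is the inflated same-distribution accuracy.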