top | item 31088607


galcerte | 3 years ago

It sure looks like such models are going to have to undergo the same sort of scrutiny regular software does nowadays. No more closed-off and rationed access to the near-bleeding-edge.


joe_the_user | 3 years ago

Well, this shows ML models should receive the same scrutiny regular software does. But of course regular software often doesn't receive the scrutiny it ought to. And before this, people commented that ML was "the essence of technical debt".

With companies like Atlassian just going down and not coming back, one wonders whether the concept of a technical Ponzi scheme and technical collapse might be the next thing after technical debt, and it seems like fragile ML would accelerate rather than stop such a scenario.

gmfawcett | 3 years ago

Wouldn't they deserve far more scrutiny? I know how to review your source code, but how do I review your ML model?

galcerte | 3 years ago

By reviewing the source code of the model, reviewing the training data, and reviewing the weight initialization, though the latter should already be specified in the source code. Also by making it abundantly clear that the libraries used to build the model were not tampered with, maybe by hashing their files or doing some reproducible-builds wizardry...
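The file-hashing idea can be sketched as a small integrity check: record a SHA-256 digest for every file in a dependency tree, then later diff against that manifest. This is only an illustrative sketch (the function names are made up, not any standard tool), and real reproducible builds go further by rebuilding from source.

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large model/library files don't blow up memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def hash_tree(root: Path) -> dict[str, str]:
    """Map each file under `root` (by relative path) to its digest."""
    return {
        str(p.relative_to(root)): hash_file(p)
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def verify(root: Path, expected: dict[str, str]) -> list[str]:
    """Return the relative paths whose digest is missing or changed."""
    actual = hash_tree(root)
    return [name for name in expected if actual.get(name) != expected[name]]
```

You would run `hash_tree` once in a trusted environment, publish the manifest alongside the model, and have reviewers run `verify` against their local copy of the libraries.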

Edit: Now that I think about it, can't data poisoning happen at prediction time, rather than just in the training phase? In that case, working around it is going to be complicated.