
cpmpcpmp | 2 years ago

If you're concerned about data leakage, it's worth noting that model weights can be used to reconstruct the original data they were trained on, so it could be misleading to claim that user data isn't being shared over the network. To avoid this, you'd need to look into techniques like Secure Aggregation or local differential privacy. Flower does provide some of this, FWIW.
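To illustrate the Secure Aggregation idea mentioned above, here is a minimal pairwise-masking sketch (not Flower's actual API, and nothing like a production protocol): each pair of clients shares a random mask that one adds and the other subtracts, so the server sees only random-looking individual updates, yet the masks cancel exactly in the sum.

```python
import numpy as np

def masked_updates(updates, seed=0):
    """Toy pairwise-masking step of Secure Aggregation.

    Each client pair (i, j) shares a random mask; client i adds it and
    client j subtracts it, so any single masked update looks random to
    the aggregating server, but all masks cancel in the sum. Real
    protocols add key agreement and dropout handling; this sketch
    assumes honest, always-online clients and a shared seed channel.
    """
    rng = np.random.default_rng(seed)
    updates = [np.asarray(u, dtype=float) for u in updates]
    masked = [u.copy() for u in updates]
    n = len(updates)
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=updates[0].shape)
            masked[i] += mask  # client i adds the shared mask
            masked[j] -= mask  # client j subtracts it, so it cancels in the sum
    return masked
```

The server can then compute `sum(masked_updates(...))` and obtain exactly `sum(updates)` without ever seeing any client's raw update.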


onethought | 2 years ago

This doesn’t sound right: if they don’t know the structure of the NN, how can they reconstruct the data from the weights alone? (Perhaps the structure is communicated along with the weights?)

aix1 | 2 years ago

Every agent training the model on their proprietary data has to have access to the model's structure in some way (otherwise, how would they train it?).

For this reason, one must assume that the model's structure is known to the adversary.

With this, the question becomes: is it possible to reconstruct training data from a trained model? We already know that, at least for some image models, the answer to that question is "yes": https://arxiv.org/pdf/2301.13188.pdf
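As a toy illustration of model inversion (far simpler than the deep-net extraction attacks in the linked paper): given the known weights of a linear classifier, gradient-ascend on an input to maximize a target class's score. For a linear model the result is just a scaled copy of that class's weight row, a crude "prototype" of what the model memorized for that class. All names and parameters here are illustrative.

```python
import numpy as np

def invert_class(weights, target_class, steps=500, lr=0.1, l2=0.5):
    """Model-inversion sketch for a linear classifier (logits = W @ x).

    Gradient ascent on the input x maximizes the target class's logit
    minus an L2 penalty: objective = W_c . x - (l2 / 2) * ||x||^2.
    The optimum is x = W[target_class] / l2, i.e. the attacker recovers
    the class's weight row directly from the trained parameters.
    """
    W = np.asarray(weights, dtype=float)
    x = np.zeros(W.shape[1])
    for _ in range(steps):
        grad = W[target_class] - l2 * x  # gradient of the objective above
        x += lr * grad
    return x
```

The point of the toy: the "reconstruction" needs nothing but the trained weights, which is exactly what a federated server (or anyone on the network path) receives.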