If you're concerned about data leakage, it's worth noting that model weights can often be used to reconstruct the data they were trained on: so it could be misleading to claim that user data isn't being shared over the network. To mitigate this, you'd need to look into techniques like Secure Aggregation or local differential privacy. Flower does provide some of this, FWIW.
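To give a flavour of what local differential privacy looks like in practice, here's a minimal sketch of randomized response, one of the classic local-DP mechanisms. This isn't Flower's API (function names here are made up for illustration); the point is just that each client noises its own value before it ever leaves the device, and the server debiases the aggregate without ever seeing any individual's true value.

```python
import math
import random

def randomized_response(bit, epsilon=1.0, rng=random):
    # Each client reports its true bit with probability
    # p = e^eps / (1 + e^eps), and the flipped bit otherwise.
    # The server only ever sees the noised report.
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if rng.random() < p else 1 - bit

def estimate_mean(reports, epsilon=1.0):
    # Debias the aggregate: E[report] = m*(2p-1) + (1-p),
    # so invert that to recover an unbiased estimate of the
    # true mean m across clients.
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

# Illustration: 10,000 clients, 30% of whom hold a "1".
rng = random.Random(0)
true_bits = [1] * 3000 + [0] * 7000
reports = [randomized_response(b, epsilon=1.0, rng=rng) for b in true_bits]
print(round(estimate_mean(reports, epsilon=1.0), 2))
```

The population-level estimate comes out close to the true 0.3 even though no single report can be trusted, which is exactly the trade-off local DP buys you.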
onethought|2 years ago
aix1|2 years ago
For this reason, one must assume that the full model (architecture and weights) is known to the adversary.
With this, the question becomes: is it possible to reconstruct training data from a trained model? We already know that, at least for some image models, the answer to that question is "yes": https://arxiv.org/pdf/2301.13188.pdf