twayt | 2 years ago
No, actually they probably can’t. There is no verifiable way to remove the data from the model apart from removing every instance of the information from the training data and retraining from scratch. The project you linked only describes a selective finetuning approach.
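To make the distinction concrete, here is a toy numpy sketch (my own illustration, not the linked project's method) of a selective-finetuning style "unlearning" step: gradient *ascent* on the forget point's loss. It degrades the model on that point, but the resulting weights are not the same as those from the verifiable baseline of retraining without the point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression trained on data that includes one point we later "forget".
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=100)

def train(X, y, steps=500, lr=0.05):
    """Plain gradient descent on mean squared error."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * (X.T @ (X @ w - y)) / len(y)
    return w

w_full = train(X, y)

# "Selective finetuning" unlearning: a few steps of gradient ascent
# on the squared error of the single point we want forgotten.
x_f, y_f = X[0], y[0]
w_unlearned = w_full.copy()
for _ in range(20):
    w_unlearned += 0.05 * x_f * (x_f @ w_unlearned - y_f)  # ascend forget loss

# The only verifiable removal: retrain from scratch without the point.
w_retrain = train(X[1:], y[1:])

def loss_on_forget(w):
    return float((x_f @ w - y_f) ** 2)

# Loss on the forgotten point goes up, but the unlearned weights still
# differ from a genuine retrain -- there is no proof the data is "gone".
print(loss_on_forget(w_full), loss_on_forget(w_unlearned))
print(np.linalg.norm(w_unlearned - w_retrain))
```

The point of the sketch: the ascent step visibly hurts performance on the forgotten example, yet nothing about the final weights certifies that the example's influence was removed, which is the gap the parent comment is describing.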
twayt | 2 years ago
At most, these efforts will amount to data laundering: it will become impossible to prove that a piece of data was used to train the model, which is not the same as conclusive proof that it was removed.
brucethemoose2 | 2 years ago
... But yeah, fundamentally the only way to throw out the books is to throw out the weights.