
Using ‘radioactive data’ to detect if a data set was used for training

162 points | olibaw | 6 years ago | ai.facebook.com

26 comments


sillysaurusx|6 years ago

At first glance, this seems like one of the more interesting projects to come out of Facebook AI. Justification: In the future, AI models will increasingly become interwoven with tech. It's not going to be so much "AI programming" as just "programming".

That raises an interesting question – one that has bothered me for a long time: Who owns copyright on training data?

As we saw with Clearview AI, a lot of data is being used without consent or even knowledge of the creators. And it's extremely hard to detect this usage, let alone enforce rights on it.

I might be misunderstanding this work, but it seems like this would give you the ability to mark your digital data in such a way that you could prove it was later used in a model.

Unfortunately, it's not that simple. You don't have access to the models (normally). And I'm betting that this work is somehow domain-specific, meaning you can't really come up with a generalized marker to imprint on all your data.

But this implies you might be able to mark your data with many such markers, in hopes that one of them will later be triggered:

> We also designed the radioactive data method so that it is extremely difficult to detect whether a data set is radioactive and to remove the marks from the trained model.
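As a rough illustration of what such a marker could look like (a pixel-space toy, not the paper's actual feature-space method), a mark could be a fixed pseudo-random direction added to every image, regenerable later from its seed:

```python
import numpy as np

def mark_images(images, seed=0, epsilon=2.0):
    """Add a small, fixed pseudo-random perturbation (a 'carrier') to each
    image. The carrier direction is derived from the seed, so the owner can
    regenerate it later to test a model against it. Toy pixel-space sketch;
    the paper applies its mark in the feature space of a pretrained network."""
    rng = np.random.default_rng(seed)
    carrier = rng.standard_normal(images.shape[1:])
    carrier /= np.linalg.norm(carrier)       # unit-norm direction
    marked = images + epsilon * carrier      # same mark on every image
    return np.clip(marked, 0.0, 255.0), carrier
```

With several seeds you get several independent markers, which is the "many such markers" idea above.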

The flipside is interesting, too: This might give companies yet another way of tracking users. Now you can check whether a given user was in your model's actual training set, and if not, fine-tune the model on the fly.

Looking forward to seeing what comes of this.

SkyBelow|6 years ago

Is there any particular reason to think this won't become another cat-and-mouse escalation, as training algorithms gain built-in protection against this (and other training-set manipulations, especially the poisoning one the article talks about)? That isn't to say it's useless: most cat-and-mouse escalations prove quite useful as long as the mouse stays a little ahead of the cat.

In this case, couldn't such a marker be detected by looking at images of the same class, checking for a common perturbation across them, adjusting the images to remove it, and then training the neural network? Even if there is no such common perturbation, adjusting the images by a falsely detected one shouldn't be any more destructive than this method itself.

If there were a way to make the marker depend on both the initial image and the class, it would be much harder to detect. But would such a marker still be detectable by the owner, given that images within a class would no longer share a common perturbation?
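The averaging attack described above can be sketched in a few lines. Assuming (unrealistically) that the attacker has a clean reference mean for the class, a common additive mark falls out of a simple difference of means; in practice only a smoothed or denoised proxy would be available:

```python
import numpy as np

def remove_common_perturbation(marked, reference_mean):
    """Estimate a class-wide additive mark as the difference between the
    mean of the suspected-marked images and a clean reference mean for
    that class, then subtract the estimate from every image.
    'reference_mean' is an assumption: attackers rarely have one."""
    estimate = marked.mean(axis=0) - reference_mean
    return marked - estimate, estimate
```

A per-image, per-class marker would defeat exactly this averaging, which is the point of the question above.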

gambler|6 years ago

>Who owns copyright on training data?

Megacorps, regardless of what the data is, who produced it, or when.

zzbn00|6 years ago

Not relevant to the main thrust of the article, but barium sulphate is not radioactive; it just efficiently absorbs X-rays. Radioactive markers are, I believe, most commonly used in PET scans; Wikipedia suggests fluorine-18 as the most common isotope.

fareed79|6 years ago

You are correct. The authors are drawing a fascinating and clever parallel, but the analogy (contrast X-ray imaging) is wrong. What they actually mean is a radioactive isotope (F-18) marking a glucose molecule so that sugar metabolism in the human body can be tracked with an imager (PET). This is one of many techniques in the field of nuclear medicine, or molecular imaging.

ISL|6 years ago

It is hard to believe that modifying input datasets won't modify the qualitative behavior of the outputs in some way.

This appears to be a modern variation on the https://en.wikipedia.org/wiki/Fictitious_entry / copy-trap techniques that mapmakers have used in the past.

rustyconover|6 years ago

I think most ML models aren’t very “lean”, meaning there is space in their weight layers for information that isn’t directly attributable to predictive accuracy. That spare capacity is likely where this new “radioactive” data is being “stored”/“remembered”.

Leanness could be increased during training by progressively trimming the width/depth of the weight layers, but I doubt every model has this done.
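A minimal sketch of the trimming idea, here as unstructured magnitude pruning (zeroing the smallest-magnitude weights), which is one standard way to cut spare capacity:

```python
import numpy as np

def magnitude_prune(weights, fraction=0.5):
    """Zero out roughly the smallest-magnitude `fraction` of weights,
    leaving the rest untouched. A toy version of the 'trimming' idea:
    less spare capacity, less room for a watermark to hide in."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * fraction)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned
```

Whether pruning actually removes a feature-space mark is an open question; it's listed here only as the kind of post-processing the comment is alluding to.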

SeriousM|6 years ago

> Radioactive data could also help protect against the misuse of particular data sets in machine learning.

This last sentence is the real reason behind this technology. Training data isn't cheap, and I'm sure the paying party wants a watermark on it.

bordercases|6 years ago

"Watermarking" and trademarking can be different things. And access to data is already licensed.

I think you're right that DRM systems are likely to be built on top of such infrastructure, but DRM has been broken in other contexts before, and the system doesn't necessarily have to be used for DRM.

Nextgrid|6 years ago

The question would be whether it’s possible to make one’s behavioural data (online or offline) “radioactive” to then prove with a high degree of accuracy whether someone (like Facebook) is stalking you online to deliver targeted ads.

At the moment, advertising providers use a lot of data for ad targeting, some of it benign and/or acquired with informed consent. As a result, it's impossible for the user to tell whether an ad was targeted at them based on data they consented to share, or based on data they didn't want collected or used for advertising.

ahartmetz|6 years ago

Could be easy enough if your opponent is not expecting it and not deploying countermeasures. Something like a customized AdNauseam (https://adnauseam.io/) that prefers clicking some particular crap you don't like.

heavyset_go|6 years ago

Large companies have no problem scraping data to be used to train their models, but they don't seem to feel the same way about you scraping theirs.

c1ccccc1|6 years ago

I'm surprised that it's even necessary to modify the dataset to achieve this. From what I've read, large models will often memorize their training data, and it seems like even with smaller models it should be possible to tell whether or not they were trained on some set of images, simply because the loss will be lower.
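A toy version of that loss-based membership heuristic, assuming you can query per-example losses and have a reference set you know was held out (real membership-inference attacks calibrate per example rather than using one global threshold):

```python
import numpy as np

def loss_membership_test(candidate_losses, reference_losses, margin=0.0):
    """Flag an example as 'probably in the training set' if its loss is
    below the mean loss of known held-out examples by more than `margin`.
    Naive sketch: memorized training examples tend to have lower loss."""
    threshold = np.mean(reference_losses) - margin
    return candidate_losses < threshold
```

The reply below points out one complication: augmentation pipelines mean the model never quite sees the raw image, which blurs this signal.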

alcinos|6 years ago

It is already possible to know whether a particular image has been used in training (see e.g. https://arxiv.org/abs/1809.06396, by the same authors), but this new work also provides a p-value giving you a confidence level for its result.

Also note that proactively watermarking the dataset can be desirable in some cases. For example, many datasets have large overlaps in the base images they use (though sometimes with different labels), so it can be interesting to know whether a model was trained on "your" version of the dataset.
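The p-value idea can be sketched as a Monte-Carlo hypothesis test (an illustration of the concept, not the paper's exact statistic): how often does a random direction align with the classifier's weights at least as well as the carrier does? Under the null hypothesis (the model never saw the marked data), the observed alignment should look like a random draw.

```python
import numpy as np

def carrier_pvalue(weights, carrier, n_trials=20000, seed=0):
    """Monte-Carlo p-value for the cosine alignment between a classifier
    weight vector and the carrier direction. Small p-value = the weights
    align with the carrier far better than chance, i.e. evidence that
    the marked data was used in training."""
    rng = np.random.default_rng(seed)
    w = weights / np.linalg.norm(weights)
    u = carrier / np.linalg.norm(carrier)
    observed = float(w @ u)
    random_dirs = rng.standard_normal((n_trials, w.shape[0]))
    random_dirs /= np.linalg.norm(random_dirs, axis=1, keepdims=True)
    return float(np.mean(random_dirs @ u >= observed))
```

In high dimensions a random direction is almost orthogonal to the carrier, which is why even modest alignment yields a tiny p-value.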

sillysaurusx|6 years ago

Training pipelines tend to perform image transformations before feeding images to the model, which complicates that.

eganist|6 years ago

Not mentioned thus far anywhere in the article or in comments: potentially weaponizing this against deep fakes.

What's to stop cameras from making raw photos "radioactive" from now on, making deepfakes traceable by tainting the image-sets on which the models generating the deepfakes were trained?

This isn't my field. I'm certain there's a workaround, but I'd suspect detecting sufficiently well-placed markers would require knowing the original, pre-mark data, which should be impossible if the data is marked before it's written to camera storage. I haven't even fully thought through the logistics yet, such as how to identify the radioactive data.

But am I missing something? I feel like this is viable.

antpls|6 years ago

Ctrl+F shows no study of how post-training quantization or pruning affects models trained on their tampered dataset.

Overall, my instinct is that one could design an NN architecture that is not affected, or even detect the tampered pictures with a preprocessing pass and untamper them.

NNs are inherently fuzzy; they tolerate noise, so you could add a bit more noise to the dataset to defeat the "radioactiveness".

Also, I'm pretty sure Facebook is not doing it to protect user data, but I have no proof.

mzs|6 years ago

but the noise FB adds is weighted a special way

mring33621|6 years ago

Have not yet read the article, as Facebook is blocked at work, but I would guess that this is mostly an application of steganographic techniques: hiding known patterns in datasets that are likely to be stolen/borrowed for training.

Then observe the outputs of said models to try to discern related patterns.

applecrazy|6 years ago

I can see a similar technique in use to detect cheaters in Kaggle competitions.

kragen|6 years ago

This is a major plot point in Accelerando.

s_gourichon|6 years ago

I've read Accelerando and don't remember a major plot point that remotely looks like this. Perhaps you're thinking of one of the many secondary plot points?

Can you elaborate? Chapter? Context? Thanks.
