top | item 28433644

kc0bfv | 4 years ago

In this scenario, manufacturers would almost certainly have to build cameras that cryptographically sign the images and videos. The cameras themselves would have to do the signing, instead of the manufacturers signing on their behalf.

And then what would the Blockchain provide in this case? A chain of cryptographically signed certificates back to a manufacturer is basically the same system we use on the web today with TLS certs. No Blockchain required.

And a major problem with that system is making sure the camera only signs genuine images. A nation-state actor, or even a large political operation, is going to have an incentive to bypass the protections on that camera - perhaps by driving fake data over whatever connects the CCD to the rest of the camera - so they can produce signed fakes.

That's if they can't just get the private key off the camera, perhaps through a side-channel attack - which can be pretty tough to pull off but is even tougher to really defend against. Once the fraudster has a private key, the game is over.
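To make the signing idea concrete, here is a minimal sketch in Python. It uses the stdlib `hmac` as a stand-in for the device's signature primitive - a real camera would use an asymmetric scheme (e.g. Ed25519) so verifiers never hold the signing secret - and the key and image bytes are made-up placeholders:

```python
import hashlib
import hmac

# Hypothetical per-device secret, burned in at manufacture.
# A real design would use an asymmetric key pair instead.
DEVICE_KEY = b"example-device-key-not-real"

def sign_image(image_bytes: bytes) -> bytes:
    """Camera-side: produce a tag binding the key to these exact pixels."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()

def verify_image(image_bytes: bytes, tag: bytes) -> bool:
    """Verifier-side: recompute the tag and compare in constant time."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

original = b"\x00\x01\x02\x03"  # stand-in for raw sensor data
tag = sign_image(original)

assert verify_image(original, tag)             # genuine image verifies
assert not verify_image(original + b"x", tag)  # any tampering breaks it
```

Note that the whole scheme rests on `DEVICE_KEY` staying secret: anyone who extracts it - side channel or otherwise - can mint valid tags for arbitrary fakes, which is exactly the threat described above.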


hdm41bc | 4 years ago

The way I thought the blockchain would be employed is to use it to track transformations of the image - post-processing, adding captions, and whatnot. This would provide an audit trail of changes to the original source image.

If, in fact, we can’t reliably sign the source image as authentic, then the rest of the system falls apart. It seems like this is the crux of the problem.

someguyorother | 4 years ago

That seems to be a DRM problem. Let's say that you want the camera to track all modifications of the picture. Then, analogous to DRM, there's nothing stopping the forger from just replacing the CCD array on the camera with a wire connected to a computer running GIMP.

To patch the "digital hole", it would be necessary to make the camera tamperproof, or force GIMP to run under a trusted enclave that won't do transformations without a live internet connection, or create an untamperable watermark system to place the transform metadata in the picture itself.

These are all attempted solutions to the DRM problem. And since DRM doesn't work, I don't think this would either.

grumbel | 4 years ago

> And then what would the Blockchain provide in this case?

The main thing a blockchain provides is a cryptographically secured logbook of history. It doesn't guarantee you that the entries in the logbook are true, but it gets a lot harder to fake history when you can't go back to change your story. You have to fake it right when you claim it happened and hope that nobody else records anything in the logbook that conflicts with your story.
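That logbook property doesn't need a full blockchain to illustrate - it comes from hash chaining alone. A toy sketch in Python (entry contents are made-up examples), showing that rewriting an old record breaks every link after it:

```python
import hashlib
import json

def append(log: list, entry: dict) -> None:
    """Append a record that commits to the previous record's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    digest = hashlib.sha256(
        json.dumps({"entry": entry, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    log.append({"entry": entry, "prev": prev, "hash": digest})

def valid(log: list) -> bool:
    """Recompute every link; any rewritten history fails the check."""
    prev = "0" * 64
    for record in log:
        expected = hashlib.sha256(
            json.dumps({"entry": record["entry"], "prev": prev},
                       sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

log = []
append(log, {"event": "photo published", "sha256": "ab12cd"})
append(log, {"event": "caption added"})
assert valid(log)

log[0]["entry"]["event"] = "different photo published"  # rewrite history
assert not valid(log)
```

What a real blockchain adds on top of this is distribution: many independent parties hold copies of the chain, so you can't quietly rebuild it from the tampered record onward. But as noted, none of this guarantees the entries were true when written.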

kc0bfv | 4 years ago

I can see how a journalist's source could then use this to help prove their integrity. And I like that as a solution for that...

But - I don't really see that as the issue today. Outlets that are interested in lying don't have to participate in this Blockchain chain-of-proof system. Malicious entities like the political groups cited in the article definitely don't have to participate. It's still really on the viewer/spreader of the fake images/misinformation to verify the images, and to rely only on verifiable images. But I think a system like that would leave out most of the population, who simply don't care.

Perhaps my worry about leaving out that chunk of population means this problem is unsolvable - and therefore my point is unfair. But I do think we need some solutions that are practical for widespread acceptance and use. If I can't imagine my parents (who are tech literate) would participate, and can't imagine some of my non-nerd friends wanting to participate, I don't think it solves the problems I'd like systems like this to solve.

kkielhofner | 4 years ago

The problem with using certificates is any media signed by a party (by nature) traces directly back to that source/certificate. With a certificate-based approach I can imagine something like Shodan meets Google Image Search being used to make it easier to source media for the purposes of enhancing training for an ML model. Needless to say I have serious concerns about this approach.

This is why our approach only embeds a random unique identifier in the asset and requires a client to extract the media identifier to verify integrity, provenance, etc.

There are also two problems at play here: are we trying to verify this media as being as close to the source photons as possible, or are we trying to verify this is what the creator intended to be attributable to them and released for consumption? The reality is everyone from Kim Kardashian to the Associated Press performs some kind of post-sensor processing (anything from cropping and white balance to HEAVY facetuning and who knows what).

kc0bfv | 4 years ago

Ok - I like this for some use cases. To restate my understanding so you can tell me I'm wrong if I am:

I think that it's still the user's job to make sure that they are skeptical of the provenance of any photos that claim to be from, say, the NY Times, that are not viewed in the NYT's viewer (if they were using your system). And then, they should still trust the image only as far as they trust the NYT. But if they're viewing the image the "right" way they can generally believe it's what the NYT intended to put out.

And perhaps, over time, user behavior would adapt to fit that method of media usage, and it would be commonplace.

I am skeptical that that "over time" will come to pass. And I think that users will not apply appropriate skepticism or verification to images that fit their presuppositions. And I think malicious players (like some mentioned in the article) will try to cultivate and propagate user behavior that goes around this system (sharing media on platforms that don't use the client, for instance).

And I guess making that broad set of problems harder or impossible is really what I'd like to solve. I can see how your startup makes good behavior possible, and I guess that's a good first step and good business case.