top | item 28433558

hdm41bc | 4 years ago

Is this a solvable problem by requiring camera manufacturers to cryptographically sign photos and videos created on their devices? If that's in place, then it seems like it could be the basis for a chain of custody for journalistic images, backed by a blockchain. This seems like the only viable solution to me, since any AI-powered solution would just be a cat-and-mouse game.

kc0bfv|4 years ago

In this scenario, it would almost certainly have to be that manufacturers build cameras that cryptographically sign the images and videos themselves - the cameras would need that capability built in, instead of the manufacturers doing the signing after the fact.

And then what would the blockchain provide in this case? A chain of cryptographically signed certificates tracing back to a manufacturer is basically the same system we use on the web today with TLS certs. No blockchain required.

And a major problem with that system is making sure the camera only signs genuine images. A nation-state actor, or even a large political operation, is going to have an incentive to bypass the protections on that camera - perhaps by spoofing what the CCD is telling the rest of the camera - so they can produce signed fakes.

That's if they can't just get the private key off the camera, perhaps through a side-channel attack - which can be pretty tough to pull off but is very tough to really defend against. Once the fraudster has the private key, the game is over: they can sign anything.
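The sign-and-verify flow being discussed can be sketched in a few lines. This is a minimal illustration, not anyone's actual system: it uses a symmetric MAC from Python's standard library as a stand-in for the signing primitive, whereas a real camera would use an asymmetric signature (e.g. Ed25519) held in secure hardware so verifiers need only the manufacturer-published public key. All names are invented for the sketch.

```python
import hashlib
import hmac

# Hypothetical per-device secret; in a real design this would be an
# asymmetric private key locked inside secure hardware.
DEVICE_KEY = b"secret-burned-into-camera"

def sign_image(image_bytes: bytes) -> bytes:
    """Camera side: tag the captured bytes with the device key."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()

def verify_image(image_bytes: bytes, signature: bytes) -> bool:
    """Verifier side: recompute the tag and compare in constant time."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

photo = b"\x89...raw sensor data..."
sig = sign_image(photo)

assert verify_image(photo, sig)                 # untouched image verifies
assert not verify_image(photo + b"edit", sig)   # any alteration fails
```

Note how this also illustrates the comment's point: anyone who extracts `DEVICE_KEY` can call `sign_image` on arbitrary bytes and produce "genuine" signatures.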

hdm41bc|4 years ago

The way I thought the blockchain would be employed is to track transformations of the image - post-processing, adding captions, and so on. This would provide an audit trail of changes back to the original source image.

If, in fact, we can’t reliably sign the source image as authentic, then the rest of the system falls apart. It seems like this is the crux of the problem.

grumbel|4 years ago

> And then what would the Blockchain provide in this case?

The main thing a blockchain provides is a cryptographically secured logbook of history. It doesn't guarantee you that the entries in the logbook are true, but it gets a lot harder to fake history when you can't go back to change your story. You have to fake it right when you claim it happened and hope that nobody else records anything in the logbook that conflicts with your story.
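The "logbook that's hard to rewrite" property can be sketched as a plain hash chain, where each entry commits to the previous one. This omits everything a real blockchain adds on top (distribution, consensus); the entry format here is invented purely for illustration.

```python
import hashlib
import json

def append_entry(chain: list, record: str) -> None:
    """Add a record that commits to the current head of the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    body_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append({**body, "hash": body_hash})

def chain_is_valid(chain: list) -> bool:
    """Recompute every link; any rewritten entry breaks all later ones."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {"record": entry["record"], "prev": entry["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "photo captured")
append_entry(log, "cropped by editor")
assert chain_is_valid(log)

log[0]["record"] = "photo captured (faked)"  # go back and change the story
assert not chain_is_valid(log)
```

As the comment says, nothing here makes the *entries* true - it only makes retroactive edits detectable.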

kkielhofner|4 years ago

The problem with using certificates is that any media signed by a party, by nature, traces directly back to that source/certificate. With a certificate-based approach I can imagine something like Shodan meets Google Image Search being used to make it easier to source media for the purposes of enhancing training for an ML model. Needless to say, I have serious concerns about this approach.

This is why our approach only embeds a random unique identifier in the asset and requires a client to extract the media identifier to verify integrity, provenance, etc.
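One reading of the identifier-based idea above can be sketched as follows: the asset carries only a random ID, and the integrity data lives in a registry the client queries, so the asset itself contains no signature traceable to a source. The registry, field names, and embedding scheme here are all invented for illustration - this is not the poster's actual system.

```python
import hashlib
import uuid

registry = {}  # stand-in for a provenance service keyed by media ID

def publish(asset: bytes) -> bytes:
    """Register the asset's hash under a random ID and tag the asset."""
    media_id = uuid.uuid4().hex
    registry[media_id] = hashlib.sha256(asset).hexdigest()
    # Embed the ID in the asset. A real system would use metadata or
    # watermarking; simple concatenation keeps the sketch short.
    return asset + b"|id=" + media_id.encode()

def verify(tagged_asset: bytes) -> bool:
    """Extract the ID, look it up, and check the hash matches."""
    asset, _, media_id = tagged_asset.rpartition(b"|id=")
    expected = registry.get(media_id.decode())
    return expected == hashlib.sha256(asset).hexdigest()

tagged = publish(b"video bytes")
assert verify(tagged)
assert not verify(tagged.replace(b"video", b"edited"))
```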

There are also two distinct problems at play here: are we trying to verify this media as being as close to the source photons as possible, or are we trying to verify that this is what the creator intended to be attributable to them and released for consumption? The reality is that everyone from Kim Kardashian to the Associated Press performs some kind of post-sensor processing (anything from cropping and white balance to HEAVY facetuning - who knows what).

amelius|4 years ago

This might lead in a direction we don't want to go. E.g. camera manufacturers could add DRM so you can't copy photos and movies, fingerprinting for CSAM, etc.

Just give me the raw image sensor.

antifa|4 years ago

I can totally see someone trying to set this up, and then, instead of any of the benefits actually working as advertised, photography ends up costing $80 in Ethereum per photo.

PeterisP|4 years ago

Assuming that media and consumers will want to consider photos/videos from random everyday people, it would require that:

1. All manufacturers, including manufacturers of shoddy but cheap mass-market devices (ones that a not-wealthy person would have on them to document interesting events) support that cryptographic signing in all their devices;

2. None of the signing keys/secrets can be ever extracted from any such devices;

3. None of these manufacturers or their employees ever generate a valid key (or a million valid keys) that could have been put in a camera of the same model that respected journalists use, but that is instead available to the government where the factory resides, or simply for sale on some internet forum, to sign whatever misinformation a resourceful agent wants to publish.

Signing pictures can mostly work with respect to a limited set of secure, trusted hardware manufactured and delivered with a trusted chain of supply, where a single organization is in charge of the keys used and the set of keys is small enough to control properly. E.g. Reuters might use it to certify photos taken by Reuters people using specific Reuters-controlled camera hardware (and they can do that just by ordinary signing of what they publish). But there's no motivation for most people in the world to accept that overhead for the devices they use for photography and video, and there's no single authority to control the keys that everybody else would trust due to international relations.

MayeulC|4 years ago

I was speaking with someone from the military. It seems that's more or less required in some cases for interrogations, taking pictures of evidence, etc., with time-stamping and GPS coordinates, using purpose-built cameras.

I can easily imagine the camera digitally signing pictures and asking for notarization. But there will always be an analog hole -- and the first faked pictures weren't altered after shooting, the scene was.

I'm all for fakes being widespread. It makes people more critical of what they see, and protects them against the few who had this capability before.

mindslight|4 years ago

No. "Trusted" hardware merely creates the illusion of a secure system, while allowing those with the resources to defeat it anyway. First, there would be 20 years of bugs once rooting your camera became a thing. Second, unless the sensor modules themselves are made into trusted components, it would be relatively easy to wire up a mock sensor to the "secure" processor. And third, camera makers would eventually be pressured to undermine the system's security invariants, à la Apple.

tsimionescu|4 years ago

Wouldn't filming a good quality screen with a higher refresh rate than the camera FPS defeat this method entirely? Especially so if the desired result is not itself high-def.

wussboy|4 years ago

It is solvable by punishing anyone who posts fake pictures. Since the problem of bad actors in society has existed for millennia, we already know a dozen ways to deal with it. We just haven't really bothered to apply any of them to the Internet.

Why we haven’t done that is a different but equally fascinating question.