I don't understand how the "proof" part works. What part of the input to the "proof generation" algorithm is so inherently tied to the real world that one can't feed it "fake" data?
My understanding is that it can't. The proof is "this photo was taken with this real camera and is unmodified". There's no way to know whether the photo's subject is another image generated by AI, a painting made by a human, etc.
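A toy sketch of the point above: the signature only binds the device to the exact pixel bytes it captured, so a signed capture of a monitor verifies just as well as a signed capture of a real scene. (Assumptions: `DEVICE_KEY`, `sign_capture`, and `verify_capture` are hypothetical names, and an HMAC stands in for whatever asymmetric device key a real attestation scheme would use.)

```python
import hashlib
import hmac

# Hypothetical: a secret baked into the camera's secure element.
DEVICE_KEY = b"secret-key-burned-into-camera"

def sign_capture(pixel_data: bytes) -> bytes:
    # The camera signs a hash of the raw bytes it read off the sensor.
    digest = hashlib.sha256(pixel_data).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()

def verify_capture(pixel_data: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign_capture(pixel_data), signature)

# Stand-ins for two different sensor readouts.
photo_of_real_scene = b"raw sensor readout of a street"
photo_of_ai_image_on_screen = b"raw sensor readout of a monitor"

sig = sign_capture(photo_of_ai_image_on_screen)
# Verifies fine: the camera really did capture these exact pixels...
assert verify_capture(photo_of_ai_image_on_screen, sig)
# ...but any edit breaks it. Nothing in the input says what was in front of the lens.
assert not verify_capture(photo_of_ai_image_on_screen + b"!", sig)
```

The check proves "unmodified since capture by this device" and nothing more; a monitor showing an AI image is, to the sensor, just another scene.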
I remember when Snapchat was touting "send pictures that delete within timeframes set by you!", and all that would happen is you'd turn to your friend and have them take a picture of your phone.
In the above case, the outcome was messy. But with some effort, people could make reasonable-quality "certified" pictures of damn near anything by taking a picture of a picture. And then there's the more technical approach of cracking a system that's physically in your hands, so you can sign whatever you want anyway...
I think the aim should be less on the camera hardware attestation and more on the user. "It is signed with their key! They take responsibility for it!"
But then we need:
1. fully active and scaled public/private key encryption for all users for whatever they want to do
2. a world where people are held responsible for their actions...
I'm not sure which is more unrealistic.
Perhaps if it measured depth, it could detect a "flat surface" and flag that in the recorded data. Cameras already "know" what is near or far simply by focusing.
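The flat-surface idea above could be sketched as a simple heuristic: a photographed print or screen sits at a nearly constant distance, while a real scene has spread-out depths. (Assumptions: `looks_flat` and the depth values are hypothetical; a real implementation would fit a plane rather than just check variance, since a print can be held at an angle.)

```python
import statistics

def looks_flat(depth_map: list[list[float]], tolerance: float = 0.05) -> bool:
    """Flag a capture whose depth readings are nearly uniform,
    i.e. the camera was probably pointed at a flat print or screen."""
    depths = [d for row in depth_map for d in row]
    # Relative spread: standard deviation as a fraction of mean distance.
    return statistics.pstdev(depths) < tolerance * statistics.mean(depths)

# Toy depth maps in metres.
screen = [[0.50, 0.50], [0.51, 0.50]]  # everything ~0.5 m away: suspicious
street = [[1.2, 3.5], [0.8, 12.0]]     # real scene with wide depth range

assert looks_flat(screen)
assert not looks_flat(street)
```

Even as a heuristic this wouldn't be proof, just a recorded flag a verifier could weigh, and an attacker with a curved screen or projected surface could still try to defeat it.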
I wonder if a 360-degree image in addition to the 'main' photo could show that the photo was part of a real scene and not just a photo of an image? Not proof exactly, but getting closer to it.
If someone cared enough to spend money on this, I think it would be an easy-to-medium-difficulty project to use an FPGA with a CSI-2 IP core to pretend to be the sensor. Good luck fixing that without baking a secure element into your sensor.