This works like TrueCrypt hidden volumes, which are created in the free space of an outer volume.
This is not secure against multi-snapshot adversaries, i.e. adversaries who can image your storage at multiple points in time.
The solution is to hide the access pattern, for example by using a write-only oblivious RAM.
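A write-only ORAM hides which logical block a write touches by spreading every update across random physical locations. Here is a minimal toy sketch; all names and parameters are illustrative, and it skips the re-encryption step that real schemes (e.g. HiVE) perform on every touched slot:

```python
import os
import random

class WriteOnlyORAM:
    """Toy write-only ORAM: every logical write performs k writes at
    uniformly random physical slots, so an observer who sees only the
    write locations cannot tell which logical block (if any) changed.
    Real schemes also re-encrypt every touched slot; this sketch skips
    encryption entirely to stay short."""

    def __init__(self, num_slots: int = 64, k: int = 3):
        self.slots = [os.urandom(16) for _ in range(num_slots)]  # noise
        self.k = k
        self.pos = {}          # logical id -> physical slot (client-side map)
        self.occupied = set()  # slots currently holding real data

    def write(self, logical_id: str, block: bytes) -> None:
        picks = random.sample(range(len(self.slots)), self.k)
        free = [s for s in picks if s not in self.occupied]
        if not free:
            raise RuntimeError("no free slot sampled; real schemes use a stash")
        target = free[0]
        for s in picks:
            if s == target:
                self.slots[s] = block           # the real write
            elif s not in self.occupied:
                self.slots[s] = os.urandom(16)  # indistinguishable dummy write
        self.occupied.add(target)
        self.pos[logical_id] = target

    def read(self, logical_id: str) -> bytes:
        # Reads are assumed unobserved in the write-only threat model.
        return self.slots[self.pos[logical_id]]
```

An observer of write locations sees k random slots touched per operation, whether the write carried real data or not.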
I'm currently working on a cloud database that uses searchable encryption. In a database the smallest things can hurt you, both the access pattern and the search pattern: you must hide which encrypted records satisfy a query condition (or a combination of conditions), the volume of those results, and which queries are identical. And the attacker can have auxiliary information (known-data, known-query, and inference attacks).
On top of that the database must be verifiable (authentic, sound, complete, fresh). Encrypted and non-encrypted data might be searched together (partitioned data security). A database must be resizable; that's the point of a cloud database. Then there is data sharing. And it must be cheap. The existing solutions in the literature compromise either security or practical efficiency.
> The capability of plausible deniability is that the encrypted file is indistinguishable from noise; there is no way to find out the amount of data stored in the cryptocontainer.
The problem with deniable encryption is: if the attacker can watch file changes, they can determine the rough size of the data in the volume. The attacker notes where in the file the changes occur. Once they get you to unlock the file, they check whether the revealed data is smaller than the region that changed. If so, they know there is more data.
Once an attacker can see your encrypted volume, you can no longer make changes to the hidden data.
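The attack described above can be sketched like this: the attacker diffs two snapshots of the container and compares the amount of churn with the size of the data eventually revealed (the function names and the 1.5x slack factor are illustrative):

```python
def changed_bytes(snapshot_a: bytes, snapshot_b: bytes) -> int:
    """Count how many bytes differ between two snapshots of the container."""
    return sum(1 for x, y in zip(snapshot_a, snapshot_b) if x != y)

def suggests_hidden_data(churn: int, revealed: int, slack: float = 1.5) -> bool:
    """If far more of the container changed than the revealed data accounts
    for, the attacker has reason to believe a hidden volume exists."""
    return churn > slack * revealed
```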
This presumes that you’re working in a random-access manner with your data. A lot of deniable-encryption use-cases involve archival access (i.e. tossing new files into the container, with each written file then treated as immutable.)
An attacker watching the side-channel of your access pattern would just see you growing the volume. But that doesn’t tell them whether you’re adding new real data, or adding noise.
> The problem with deniable encryption is: if the attacker can watch the file changes, one can determine the rough size of the data in the volume.
I think it's even more basic than that. An attacker just needs to know you've used such a tool (e.g. FractalCrypt) to know you're not really unlocking all of it.
After that it's pretty much game over, at least on being able to have deniability.
This is a specific threat model where the attacker can watch live file changes undetected. This may be acceptable e.g. for a laptop without historical archives of the file.
So... assuming there are bad guys demanding access to your data and you say "oh yes, I've been using this plausible deniability encryption/archive format", chances are they're going to keep torturing you until they get the data they want.
Also – assuming you have three layers of equal compressed size in your container, and you provide two passwords, can't your interrogator see that only 2/3 of the container file gets accessed, and has a reason to believe there's more data to be found?
The game theory here is interesting. If they are sure that you have the information (for example, the private key to your bitcoin wallet) then "plausible deniability" isn't really a useful feature. It means you can credibly bluff "The key isn't on this device", but they can just torture you until you reveal which device it is on.
In contrast, the threat model of Rubberhose[0] assumes that the secret police believe that you have an incriminating file on your device, but they aren't sure. That means if you are innocent and disclose all your passwords to them, they won't be satisfied and will have to keep on torturing you forever, hoping that you might give them the information you don't actually have. Therefore they have to convince you that there is some information that you could hand over which would satisfy them, and they mustn't over-estimate what information you have, otherwise they are committing to torturing you forever and there is no advantage to you disclosing even the information you do have.
In countries like the UK, where you can be jailed or fined for not giving up a password, this provides a way to comply and escape jail while still protecting the hidden data. TrueCrypt did this, and after its developers stopped supporting it, VeraCrypt came along.
You obviously don't reveal that you are using a plausible-deniability storage method. Give it a zip extension and rename the application you access it with to something like Zip Archiver. "It's an encrypted zip file and the password is ..." How do they know it's not a zip, or that there's secret data there?
Partly because these systems are designed so that the hidden data is destroyed if it isn't unlocked. Your "plausible" container, when the hidden part isn't unlocked, presents the rest of the container as free space, i.e. it gets destroyed by an OS that isn't aware it shouldn't write there.
Which is common with HDD block-device containers anyway (not sure this tool makes as much sense): if my laptop here (which is encrypted) gets unlocked with two passwords, you would need to independently verify that I in fact normally used three; the idea is that you can't prove the "free space" isn't just normal free space on my HDD.
Combine that with a TPM chip and no recovery codes, and the HDD can't realistically be extracted except by nation-state-level actors with a motivated interest.
Also why would "truly secret" data be large in size to start with? The more likely relationship would be 100:10:1 or greater in terms of "plausible" to "implausible".
One of the best mitigations against rubber-hose and similar attacks is a hardware key. If you leave it at home, you can't be compelled to decrypt unless an attacker also breaks into your home and searches the place.
In a pinch, you might be able to conveniently "lose" your hardware key, or smash it if you hear the front door break open. Doing so effectively erases your data without actually erasing it, since it's unreadable without the key.
To expand on your second point, these kinds of systems should let you set a fixed volume size, like 1 GB or 5 GB, unrelated to the payload size.
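That could be as simple as padding every container to a user-chosen fixed size with random bytes, so the file's length is independent of the payload (a sketch; a real tool would encrypt first so payload and padding are indistinguishable):

```python
import os

def fixed_size_container(payload: bytes, size: int) -> bytes:
    """Pad the (already encrypted) payload with random bytes up to a fixed,
    payload-independent size, e.g. 1 GiB, so the file length leaks nothing."""
    if len(payload) > size:
        raise ValueError("payload exceeds the fixed container size")
    return payload + os.urandom(size - len(payload))
```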
> Whereas, using FractalCrypt you can safely give away the keys to unclassified volumes, and there is no way to prove that there are actually more volumes than you have disclosed.
So the problem is that you're going to use what exactly to decrypt it to prove plausible deniability? FractalCrypt? And then what do the adversaries do once they google FractalCrypt and see the phrase above?
Once your adversaries know you're using FractalCrypt, you've negated any plausible deniability.
Sure, they can keep beating you or keep you locked up, but you know that they will anyway because they'll never be satisfied with the amount you've given them. There could always be more. So there is no point giving up all of them. What that means for your mental state and the likelihood that you will give them all up, I don't know.
On the other hand, if you don't use this, they'll just beat you or keep you locked up until you've given the key to all the volumes they can see.
Which type is better depends on your situation. You can't just say it's pointless because they know what the purpose of this encryption software is.
There is no way for the attacker to know how many keys you have. So you can give the attacker 2 keys, while you have your actual sensitive data behind the 5th one.
It could still be a challenge to convince the attacker that you really only had n-1 keys, so you may need to include plausibly-sensitive data in earlier layers.
> First, it creates a cryptocontainer of a user-specified size, filled with random characters. To create the first volume, the program archives the user-specified files to the beginning of the cryptocontainer and encrypts it using the first key.
This is problematic; key reveal gives important metadata hints as to size and location of other volume(s).
This could be redeemed by encoding offset and size parameters in the key. These could be randomized or fixed at initialization.
Great ambition, I'll be keeping tabs on how this evolves.
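The offset-and-size-in-the-key idea could look roughly like this, deriving each volume's location from its own password so that revealing one key says nothing about where other volumes live (the function name, salt handling, and demo-strength scrypt parameters are all illustrative):

```python
import hashlib

def volume_params(password: str, salt: bytes,
                  container_size: int, max_volume: int) -> tuple:
    """Derive a volume's offset and size cap from its password alone."""
    d = hashlib.scrypt(password.encode(), salt=salt,
                       n=2**12, r=8, p=1, dklen=16)  # demo-strength cost
    offset = int.from_bytes(d[:8], "big") % container_size
    size_cap = int.from_bytes(d[8:], "big") % max_volume
    return offset, size_cap
```

The same password always maps to the same location, but without the password the placement is uniformly random as far as an observer can tell.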
1. Create 10 files of encrypted random data. Discard the passwords.
2. Replace a random selection of these files with encrypted fake data, using different passwords for each one.
3. Replace one file with the actual encrypted data.
If challenged, openly share the scheme you used with your opponent. Under duress, give up the passwords to the fake data files. Insist that one fake data file is the real data and that the keys to all the other files were discarded.
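The three steps above might be sketched like this; the XOR keystream is only a stand-in for a real cipher whose ciphertext is indistinguishable from random bytes, and all names are made up:

```python
import hashlib
import os
import random

def toy_encrypt(key: str, data: bytes) -> bytes:
    """XOR with a SHA-256 keystream; symmetric, so the same call decrypts.
    A stand-in for a real cipher with random-looking ciphertext."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key.encode() + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def build_decoys(real_data: bytes, real_key: str, n: int = 10, size: int = 4096):
    """Steps 1-3 above: n noise files whose 'passwords' were never kept,
    a few replaced by encrypted fake data, and one by the real data.
    Every file is the same size and looks like random bytes."""
    files = [os.urandom(size) for _ in range(n)]          # step 1: pure noise
    chosen = random.sample(range(n), 4)
    real_slot = chosen.pop()
    fake_keys = {i: f"fake-key-{i}" for i in chosen}      # hypothetical passwords
    for i in chosen:                                      # step 2: fake data
        files[i] = toy_encrypt(fake_keys[i], os.urandom(size))
    padded = real_data.ljust(size, b"\0")                 # naive length padding
    files[real_slot] = toy_encrypt(real_key, padded)      # step 3: real data
    return files, fake_keys, real_slot
```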
Get punished anyway because "only bad people have things to hide" would be my guess. It's a shame we even need to have plausible deniability in the first place.
Can't speak for this implementation, but deniable encryption has an additional benefit over just encrypting stuff: even if you are actually targeted, they need to get really, really deep into your life to even know it exists, and where.
Be that your super secret draft annual report or a fat bitcoin wallet, it will pass a casual inspection and they will move onto more interesting targets.
Old but relevant: https://defuse.ca/truecrypt-plausible-deniability-useless-by... Be careful with plausible deniability: depending on your threat model, it's only effective against a soft "lawful" adversary. It's probably a terrible idea against an adversary willing to resort to "enhanced interrogation techniques" (not to mention the usual $5 wrench from xkcd).
This article is about the problem with TrueCrypt, which allows you to create only a single hidden layer.
If TrueCrypt usage is detected, demanding the hidden-layer password is quite reasonable, because the attacker can then be sure the container has been decrypted in its entirety.
With FractalCrypt, only part of the container can be decrypted even with all the passwords known; hence, denying the existence of truly secret data can be quite convincing, for example by first giving out the unclassified volumes and, after a long interrogation, the semi-secret ones.
In addition, the article states that
> In other scenarios the feature can be useful. If the attacker has limited resources (i.e. can only torture you for 30 minutes), or if you are "innocent until proven guilty" under the law, then it can be advantageous to use a hidden volume. Just don't recommend TrueCrypt to your friends in North Korea, or at least make sure they use a hidden volume.
In most situations, such as a police raid or criminal robbery, you will not be tortured to death.
However, it is really better not to use FractalCrypt in North Korea.
For use within Linux I wouldn't trust anything but LUKS. Speaking of LUKS, it can theoretically accomplish this with offsets and payload alignment, but I'm not sure I'd trust it not to fudge things up when reaching allocation limits.
I am not an expert in encryption or plausible deniability, but couldn't steganography be used to conceal sensitive information? I realize steganography would be problematic if you need to store large amounts of information, or need to modify the information often, but couldn't it be used for small amounts of information that doesn't change often?
Couldn't you tell an attacker "It's just a picture of my cat."?
I'm not an expert either, but I think you could also use a one time pad to do exactly that.
You have a picture of a cat, and 2 one time pads (OTPs). OTP #1 is the key for your real data, and you can generate OTP #2 such that it decrypts the ciphertext (in this case, an image) into whatever data you pick.
Whether this is practical is a completely different question though.
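That works because a one-time pad lets you forge a second key for any plaintext of the same length: XOR the ciphertext with the decoy message to get a pad that "decrypts" to the decoy (the wallet and cat strings are placeholders):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

real = b"wallet seed words"            # the actual secret (placeholder)
pad1 = os.urandom(len(real))           # the real one-time pad
ciphertext = xor(real, pad1)           # this is what you store

decoy = b"cute cat picture!"           # any innocuous data of equal length
pad2 = xor(ciphertext, decoy)          # forged pad that "decrypts" to decoy

assert xor(ciphertext, pad1) == real   # your key reveals the secret
assert xor(ciphertext, pad2) == decoy  # the surrendered key reveals the cat
```

The catch is that each pad must be as large as the data and never reused, which is why this is rarely practical at scale.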
https://github.com/gzm55/dpad-enc
Here is a PoC that encrypts several short static secrets into a sufficiently large file and decrypts only one of them by selecting the corresponding password.
Some log-oriented file systems may provide insight into the changes made. From what I know, ZFS is one example of such a file system, and btrfs is another.
[0] https://en.wikipedia.org/wiki/Rubberhose_%28file_system%29
Wouldn’t multisig be preferable? “Umm, look you’re going to need to go torture my brother 80 miles away at the same time…”
Also, is it a coincidence that the "Made with C++" badge is colored red?