A naive person googling about TrueCrypt and stumbling on this article (well written with an authoritative tone) will think that TrueCrypt is completely cracked, and not bother to use it.
This research is interesting and useful, but do we really want to scare off people from using TrueCrypt?
Couldn't you add a paragraph at the top or to the side that says, for example:
"If TrueCrypt is used in the intended way, i.e., you finish your work with TrueCrypt, dismount the TrueCrypt volume, and then shut down your computer, then the data protected by TrueCrypt is secure if your computer is lost, stolen, or copied at that point."
I understand your intended audience (I'm one of them). But TrueCrypt is the best protection we've got (in terms of price (free), quality, license terms, multi-platform support, algorithm choice, etc.). I presume the author likes it and uses it himself too.
We want more people to use it, right? So let's at least make it clear that it isn't cracked or broken when used in the intended manner.
If so, I'd say anyone who participates in TrueCrypt's development is also part of the intended audience.
We want more people to use it intelligently. If they're not capable of understanding the implications of this research, are they really going to be able to use this software effectively? How much hand holding do we need to do, here?
Yes and no. On one hand, we want to make clear that it's not vulnerable when used correctly; on the other, it is important to discuss this thoroughly and ensure that people understand the ramifications of this discovery. Short of doing nothing, nothing is less secure than using security software "badly" or "incorrectly": that false veneer of security can be deceptive and lead to other bad practices.
I just noticed that the author commented that he attempted to add some clarity to this effect.
The addition you're recommending is somewhat orthogonal to the article. The update at the end of the article clarifies that all one must do, strictly speaking, to protect against the weakness they describe is to power the computer down. If an adversary is knocking down your door, it seems more important to get the keys out of RAM than to first ensure filesystem and OS integrity by saving/dismounting and cleanly shutting down.
In this blog post, forensic experts realize TrueCrypt uses headers.
But there's still a valuable lesson: a half-encrypted system is not an encrypted system, and it will leak information. There's a paper on this from 2008, I think, before TrueCrypt implemented full operating-system encryption:
https://www.schneier.com/paper-truecrypt-dfs.html
The new era of encryption will be marked not by making existing encryption solutions more secure, but by concealing the very existence of encrypted data.
"Here's my data but you cannot read it" does not go over well with courts, high-stakes competitors, and deep-pocketed enemies.
Once it is known "what to crack", the "how to" will be found.
"Rubber-hose cryptanalysis" is one such solution :)
If you don't possess anything to crack (or so "they" think), you're safe :)
Interestingly, this is something Julian Assange worked on several years before starting Wikileaks:
Starting around 1997, he co-invented the Rubberhose deniable encryption system, a cryptographic concept made into a software package for the Linux operating system designed to provide plausible deniability against rubber-hose cryptanalysis;[68] he originally intended the system to be used "as a tool for human rights workers who needed to protect sensitive data in the field."
The flip side of this is that it becomes impossible for someone to prove that they haven't got an encrypted volume stored somewhere. It will be interesting to see in which way the courts go with this.
Implementations of crypto, such as TrueCrypt, rely on algorithms/ciphers such as AES whose output (in some modes) basically appears random... But the appearance of randomness is not enough if someone is convinced there is meaningful data there. Of course, a break such as "we can tell if there's a hidden TrueCrypt volume" is bad, and if I recall correctly there are ways of doing this now.
You'd need to basically never transmit the data; transmission automatically implies there is something there. If you didn't transmit, just used the data locally, and it appeared random, you'd have a pretty solid case for "they can't know". But if you slip up just once, it's all over. They know.
By the way, this is exactly why the NSA will not catch many terrorists. You can always hide data and you can always hide communication (which is just data written and read by two different endpoints).
I am not following you. Concealing? Do you mean physically hiding the media on which it is stored, like a flash chip in a tie clasp?
If you mean concealing as in hidden partitions, alternate data streams, or digital steganography - these are all easily detectable upon close inspection. If there are extra bits where none are expected, that becomes a giveaway. Perhaps enough misdirection and a custom hiding strategy could further obfuscate the location and content of the data, but as for hiding its existence - this is not easily accomplished (if even possible).
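The "close inspection" here is often just statistics: ciphertext and compressed data sit near maximum entropy, so a high-entropy blob in slack space or an "unused" region is itself a flag. A toy illustration of the kind of test an examiner might run (the threshold is made up for the example, not from any real tool):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte of a region; 8.0 is the maximum (uniform random bytes)."""
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_encrypted(region: bytes, threshold: float = 7.9) -> bool:
    # Encrypted/compressed data is near 8 bits/byte; text, code, and
    # zero-fill are far below. Threshold is illustrative only.
    return shannon_entropy(region) > threshold
```

This is why "it just looks like random bytes" cuts both ways: random-looking bytes where the filesystem says nothing should be is exactly what a forensic scan is looking for.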
Interesting writeup and cool that Mr. Ligh has provided these plugins / tools. On the whole though, doesn't appear to contain much novel information.
Of course, partition-only encryption has weaknesses in that the OS may store data in another partition (e.g. you've encrypted the "D:" drive but Windows just dumps a cached file to "C:", let alone the whole pagefile challenge). So you need to trust your OS not to write the master key to disk, which is widely acknowledged. I personally run with no page file, so memory ought not to be written to disk by the OS itself (barring a malicious adversary), although this solution isn't the best for someone with 1GB of RAM.
Full-disk encryption would block this attack, e.g. encrypted swap on Linux (crypttab makes this quite easy) or a TrueCrypt system drive. Even if the key is dumped to disk, an attacker can't get it - again, barring online access to the system. With online access this is all null and void regardless, as they could just issue commands to dump memory to disk no matter what you've done!
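For reference, a minimal sketch of the crypttab approach mentioned above (the device names are assumptions; adjust for your system). The swap partition is keyed from /dev/urandom on every boot, so anything paged out is unrecoverable after shutdown:

```
# /etc/crypttab - throwaway-key encrypted swap (device name is an example)
cryptswap  /dev/sda2  /dev/urandom  swap,cipher=aes-xts-plain64,size=512

# /etc/fstab - mount the mapped device as swap
/dev/mapper/cryptswap  none  swap  sw  0  0
```

The trade-off is that hibernation no longer works, since the resume image can't be decrypted after the key is discarded.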
An advantage you gain right off the bat is that patterns in AES keys can be distinguished from other seemingly random blocks of data. This is how tools like aeskeyfind and bulk_extractor locate the keys in memory dumps, packet captures, etc. In most cases, extracting the keys from RAM is as easy as this:

$ ./aeskeyfind Win8SP0x86.raw
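The pattern being exploited is the AES key schedule: an in-use AES-128 key normally sits in RAM next to its 176-byte expanded schedule, and that redundancy is a hard algebraic signature. A minimal sketch of the idea (exact-match only; the real aeskeyfind also tolerates bit errors and handles AES-256):

```python
# Sketch of the aeskeyfind idea: scan a memory image for AES-128 key
# schedules. The 16-byte key is betrayed by the 160 bytes of expanded
# round keys that typically follow it in RAM.

def _make_sbox():
    # Build the AES S-box: multiplicative inverse in GF(2^8),
    # followed by the standard affine transform.
    def xtime(a):
        a <<= 1
        return (a ^ 0x11B) & 0xFF if a & 0x100 else a
    exp, log = [0] * 256, [0] * 256
    x = 1
    for i in range(255):              # 0x03 generates GF(2^8)*
        exp[i], log[x] = x, i
        x ^= xtime(x)                 # multiply by 3
    sbox = []
    for a in range(256):
        b = exp[(255 - log[a]) % 255] if a else 0   # multiplicative inverse
        s = 0x63
        for r in range(5):                          # affine transform
            s ^= ((b << r) | (b >> (8 - r))) & 0xFF
        sbox.append(s)
    return sbox

SBOX = _make_sbox()
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key(key: bytes) -> bytes:
    """AES-128 key expansion: 16-byte key -> 176-byte round-key schedule."""
    w = [list(key[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = t[1:] + t[:1]               # RotWord
            t = [SBOX[b] for b in t]        # SubWord
            t[0] ^= RCON[i // 4 - 1]
        w.append([a ^ b for a, b in zip(w[i - 4], t)])
    return bytes(b for word in w for b in word)

def find_aes128_keys(image: bytes):
    """Return (offset, hex key) for every offset whose next 176 bytes form a
    valid key schedule. Naive full expansion per offset: slow, but it shows
    the principle."""
    hits = []
    for off in range(len(image) - 175):
        cand = image[off:off + 16]
        if expand_key(cand) == image[off:off + 176]:
            hits.append((off, cand.hex()))
    return hits
```

Because each candidate is checked against a rigid algebraic relation, false positives are vanishingly rare - which is also why a bare key with no schedule nearby (as some hardened implementations arrange) is far harder to find.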
Shouldn't it be possible to store an AES key in a way that's indistinguishable from random data?
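In principle, yes - at rest in RAM, at least. One known trick (used by some research systems; this sketch is illustrative and is not what TrueCrypt does) is to keep only XOR shares of the key, each of which is individually indistinguishable from random, and recombine them transiently at use time. The key schedule still has to exist during actual encryption, so this narrows the exposure window rather than closing it:

```python
import secrets
from functools import reduce

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int = 3):
    """Split `key` into n XOR shares. Each share alone is uniformly random,
    so none of them exhibits key-schedule structure for a scanner to find."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = reduce(_xor, shares, key)     # key ^ share1 ^ ... ^ share(n-1)
    return shares + [last]

def join_key(shares) -> bytes:
    """Recombine the shares (XOR of all of them) just before use."""
    return reduce(_xor, shares)
```

A memory scanner now needs all the shares plus knowledge of the scheme, instead of spotting one self-identifying 176-byte structure.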
At PrivateCore, we keep key material (and the entire Linux stack) pinned in the CPU cache, then encrypt main memory. This thwarts physical memory-extraction attacks: cold boot, FireWire, Thunderbolt, NV-DIMMs, bus analyzers, malicious RAM, etc.
Note that this doesn't help if someone compromises the software stack and extracts memory contents logically: a compromised kernel running in cache can simply decrypt memory contents.
I thought that this issue was already well known? Paging of virtual memory causes keys to be written to disk...
But this article. Wow:
"This is a risk that suspects have to live with, and one that law enforcement and government investigators can capitalize on"
I love it. Only criminals use TC so let's call all TC users 'suspects'.
What about people who have portable devices and want to store sensitive financial or medical information? What about people who want to backup this information into a cloud?
How does secure virtual memory interact with this? If data in RAM, or at least data paged to disk, is encrypted, wouldn't that stop this? Do I know what I'm talking about?
This article is just an analysis of one of the inherent and well-documented weaknesses in TrueCrypt: the fact that the encryption key must stay in RAM the entire time you are using an encrypted volume. So, as has always been the case, treat the contents of your RAM as precious when a TrueCrypt volume is mounted.
It means if you're worried about the contents of your encrypted drives being uncovered, you need to make sure no malicious processes gain access to a dump of your system's memory while it's booted / running / encrypted drives are mounted.
Forgive a fool, but are the Volatility plugins used (truecryptmaster and truecryptsummary) only provided to students of the official Volatility training, or are they released somewhere? I can't find them on the project's Google Code page.
http://www.reddit.com/r/netsec/comments/1va904/truecrypt_mas...
Would it help to store a long string of data in memory, say 10MB, then place the key somewhere in the middle of it? The placement could be based on the password. Just an idea.
It would not help. Volatility is not doing any searching; instead it reads TrueCrypt's data structures in memory to find the key - basically replicating TrueCrypt's algorithms offline.
Nothing's 100% safe - if data exists and can be read by its owner, de facto it can be read by someone else. The only "safe" data is that which never leaves your brain.