Let's say that you're using OTR to provide very strong end-to-end encryption for a conversation between yourself and a buddy, Bob. Maybe he's in a hostile area, and you're worried that if his government sniffs his traffic, he could be executed for speaking to Americans.
If OTR is configured correctly, intercepted data in transit is almost certainly safe: no one will be able to immediately decrypt it, because of the strong encryption.
So are you safe?
Probably not. The next step that government would take would be to raid your friend Bob's apartment, arrest him, and take his hard disk. His OTR key (and, if he uses Pidgin and saves his passwords, his account credentials in plaintext) is plainly available on the disk. They now have the private key.
But what if he used TrueCrypt or PGP full-disk encryption? His data would be safe from decryption then, right?
Sort of. If they're trying to break the actual encryption, they'd likely be unable to do so. Unfortunately, the weak point for TrueCrypt disks or volumes isn't the crypto; it's the passphrase. The passphrase can be brute-forced significantly more easily than breaking the encryption itself. Furthermore, as xkcd so accurately pointed out, a hostile government will throw you in prison (or, worse, hit you repeatedly with a wrench) until you divulge your passphrase and data.
Encryption is great, and I encourage everyone to use reliably strong crypto. Will that keep your data safe from the criminals that stole your work laptop? Absolutely. Will it keep your data safe from the NSA? You're kidding yourself.
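To put rough numbers on the passphrase-versus-cipher gap: even a long, fully random passphrase drawn from printable ASCII carries far less entropy than the cipher's keyspace. The lengths and 95-character alphabet below are illustrative assumptions, not a claim about any particular product:

```python
# Entropy of a random printable-ASCII passphrase vs. a 256-bit key.
# Even long random passphrases fall far short of the cipher's keyspace,
# which is why attackers go after the passphrase, not the cipher.
import math

KEY_BITS = 256
ALPHABET = 95  # printable ASCII characters

for length in (8, 12, 16, 20):
    bits = length * math.log2(ALPHABET)
    print(f"{length}-char passphrase: ~{bits:.0f} bits (vs {KEY_BITS}-bit key)")
```

Even 20 truly random characters come in around 131 bits, and real human-chosen passphrases are far weaker than random.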
Remember that post a little while ago about how most logical fallacies aren't actually logical fallacies? Here, you are committing an actual logical fallacy. It's called "shifting the goalposts."
The article is in response to a dragnet surveillance program, where everyone's communications are watched and presumably datamined. It's very easy to do this, because nothing is encrypted, and everyone uses services that expose metadata (like who is IM'ing who).
Your comment is entirely true. However, it presents an adversary that wants targeted surveillance, not a dragnet. It assumes that Bob will be immediately arrested if his communications become encrypted.
This is not the threat model that we're faced with now. Let's say you and Bob communicate using accounts you've made on random XMPP servers using Tor, and all the messages are encrypted with OTR. Both servers are in the US, and the NSA's metadata database shows [email protected] sending lots of ciphertext to [email protected].
This is "NSA-proof" in that the NSA would not know to link [email protected] with you using their existing systems. They would have to drastically escalate the cost of their surveillance program with respect to you and Bob to figure out what you're talking about. Unless you really are a political dissident, conspiracy theorist who accidentally discovered the UN's black helicopter program, or radical Islamist, you are now out of the surveillance dragnet.
That is to say, unless the threat model changes, using privacy-enhancing technology will keep your data safe from PRISM and similar dragnet programs.
That's the old mentality: "NSA/CIA/FBI/UGA will actively spy on me."
The way I see it, that's not the greatest danger right now. Instead, we should be worried about the government being able to passively spy on everyone at the same time, by indiscriminately siphoning and analyzing data.
However, according to https://en.wikipedia.org/wiki/Perfect_forward_secrecy OTR does provide "perfect forward secrecy as well as deniable encryption". Doesn't that provide some protection against rubber-hose cryptanalysis?
One thing widespread encryption would do is make it impossible for the NSA to just slurp the combined textual output of humanity into hadoop and mapreduce over it.
They can use "hitting the suspect with a wrench" cryptanalysis on a solo victim, but not on a crowd.
If we accept all claims to be true, that the NSA does have a PRISM program, and is able to get data from Google, Microsoft, Yahoo, Facebook, etc., and we also accept the claims from those companies that they have provided no 'direct access' to their systems, then perhaps SSL is broken?
There's no need to throw him in jail. They can just install a hidden camera and record him typing the password. Next time he's out shopping, the "maid" will drop by and copy the hard drive.
> The passphrase can be brute-forced significantly more easily than breaking the encryption itself. Furthermore, as xkcd so accurately pointed out, a hostile government will throw you in prison (or, worse, hit you repeatedly with a wrench) until you divulge your passphrase and data.
Not to detract from the point of your post, but for anyone interested, that's what TrueCrypt's 'plausible deniability' feature [1] is for. It can create a hidden volume inside your encrypted volume, protected by a different password. If you're ever forced to give up a passphrase by a government agency or anyone else, you hand over the password to the outer (decoy) volume, and (in theory) you'll appear to be fully cooperating. Given only the outer volume's passphrase, it is impossible (short of brute-forcing the hidden volume's passphrase) to prove that the hidden volume exists. Ideally, you'd probably want to put something "embarrassing" but legal on the decoy volume (e.g., gay porn), to make the "plausible deniability" for using full-disk encryption more plausible.

[1] http://www.truecrypt.org/docs/?s=plausible-deniability
> Probably not. The next step that government would take would be to raid your friend Bob's apartment, arrest him, and take his hard disk. His OTR key (and, if he uses Pidgin and saves his passwords, his account credentials in plaintext) is plainly available on the disk. They now have the private key.
The DEFINING FEATURE of OTR is that of forward secrecy; key compromise does not permit retroactive decryption.
Otherwise, we could just use TLS. (Technically, we could now, just enforcing the EDH modes.)
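For anyone unfamiliar with the mechanism: OTR (like the ephemeral-DH TLS modes mentioned here) derives each session's key from throwaway secrets that are discarded afterwards, so seizing a long-term key later reveals nothing about past sessions. A toy sketch; the 64-bit prime is illustrative only and far too small to be secure:

```python
# Toy sketch of why ephemeral Diffie-Hellman gives forward secrecy:
# each session's key comes from fresh, throwaway secrets.
# Parameters are tiny and illustrative -- NOT secure.
import secrets

p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, the largest 64-bit prime
g = 2

def session_key():
    a = secrets.randbelow(p - 2) + 1   # Alice's ephemeral secret
    b = secrets.randbelow(p - 2) + 1   # Bob's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)  # public values, exchanged in the clear
    k_alice = pow(B, a, p)
    k_bob = pow(A, b, p)
    assert k_alice == k_bob            # both sides agree on the session key
    # a and b now go out of scope: there is nothing durable to seize later.
    return k_alice

k1, k2 = session_key(), session_key()
# Distinct sessions get (almost surely) distinct keys.
print(k1 != k2)
```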
Weren't there cases of the government forcing people to give them their passphrase in the US already? Somehow I seem to remember something like this very vaguely.
> The passphrase can be brute-forced significantly more easily than breaking the encryption itself.
Doesn't brute forcing this depend on the strength of the passphrase? For large enough N, if neither can be done in the next N years, does it really matter if it's significantly easier? Isn't there a non-negligible likelihood that in the next N years we'll figure out ways to break stronger forms of encryption but we won't figure out how to brute force strong passphrases efficiently?
Doesn't truecrypt use PBKDF2 or similar? In which case (assuming a good password) it would still be uncrackable in any practical sense.
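It does; TrueCrypt derives its header keys via PBKDF2, albeit with a modest iteration count by modern standards. A sketch of the key-stretching idea: each passphrase guess costs many HMAC rounds instead of one. The iteration count and candidate-space size below are illustrative assumptions, not TrueCrypt's actual parameters:

```python
# Key stretching: every passphrase guess costs `iterations` HMAC rounds,
# multiplying the attacker's work by the same factor.
import hashlib
import time

def derive_key(passphrase: bytes, salt: bytes, iterations: int = 200_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations)

salt = b"0123456789abcdef"
t0 = time.perf_counter()
key = derive_key(b"correct horse battery staple", salt)
per_guess = time.perf_counter() - t0

candidates = 10 ** 12  # assumed size of the passphrase search space
years = candidates * per_guess / (60 * 60 * 24 * 365)
print(f"{per_guess * 1000:.0f} ms per guess -> ~{years:,.0f} core-years to search the space")
```

The catch, as the grandparent notes, is that a weak passphrase shrinks `candidates` so far that no iteration count saves you.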
Besides, in such an example the government would have to suspect Bob already on some other grounds. In the case of a despotic regime, they'd probably already have him in prison.
These edge cases are stupid. If 99% of Americans are safe, that's good enough. Dudes that actually break the law are not our problem; the government should get them. The problem is all the other people who are doing nothing wrong other than having opinions or beliefs that the government does not like. Or, in the case now, simply sending e-mails. If we can make it harder to get their shit, that's enough.
This whole idea that "no, we can't do it" is defeatist. We can and we should do it, and then we should do more. As much as we can, to make it as hard as possible. And if the government still wants to do shit, then let them. That's their prerogative.
I have a question that perhaps a cryptography expert could answer for me.
My father told me when he was young, he visited Oak Ridge National Labs on a trip, and while there, they told him they had satellites that could read the print on a newspaper. At the time, it wasn't classified information; it was just something that nobody knew. Approximately 15-20 years later, satellites with that capability became well-known. This indicates to me that top secret technology is probably somewhere around 15-20 years ahead of what the general public knows about. This may be less true today than it was back then since nowadays the equipment and factories to develop state-of-the-art technology run in the billions of dollars.
Where I'm going with this: is it reasonable to assume that "future technology" 20 years from now could crack AES-256 or PGP? If so, it seems reasonable to me that the NSA could already crack today's encryption for high-priority data. Add that to the fact that they tend to hire the very best experts in the field (mathematicians and cryptographers) and it doesn't seem entirely unreasonable to me that their decryption technologies are pretty good. Of course, I'm not talking about better technology in a brute-force sense; it would still be impossible to crack 256-bit encryption. I'm talking about algorithmic weaknesses.
But then again, I have only a basic knowledge of cryptography. Would any experts like to comment?
This has come up in the past on HN. As I understand it the newspaper story is bull. As for advancements in technology the answer is likely no - producing that technology requires an entire toolchain/industry that the NSA is unlikely to replicate with its size. The only shot the NSA has at pulling ahead of us is with entirely mathematical things like crypto (which they did at least in the 70s with differential cryptanalysis). With math you can simply hire a bunch of smart people and throw them in a room together which is much less capital intensive than the massive, fundamental research needed to advance technology ahead of the industry.
> This indicates to me that top secret technology is probably somewhere around 15-20 years ahead of what the general public knows about.
Err, maybe in optics at the time, but you can't just generalize like this. You can't consistently be ahead of everything all the time. More than likely the NSA suffers under Moore's law like everyone else.
"Where I'm going with this: is it reasonable to assume that "future technology" 20 years from now could crack AES-256 or PGP?"
There are a few related issues here, and so the answer is a bit complicated.
The only evidence for the security of AES is heuristic, based on testing the output of the cipher to check for properties that secure block ciphers should have. Some new attack strategy could completely undermine AES. Similarly, PGP relies on block ciphers and hash functions that are based on such evidence.
On the other hand, public key cryptography has proofs of security under certain assumptions about the complexity of certain problems. A proof that P != NP is necessary to prove that PKE is secure, but it is not sufficient on its own and we do not even have that much.
Now, assuming that (a) the heuristic evidence for AES and various hash functions is a reliable indicator of security and (b) the assumptions about computational complexity are correct, then both AES and PGP can be used essentially indefinitely. The reason is that your key size can continue to increase -- for AES, you can iterate the cipher (e.g. "triple AES"), and for PGP you can keep making your keys larger (16384-bit ElGamal?), and you will always be able to stay ahead of your opponent. There are issues with this approach, of course -- it would take a lot of computing power to actually use 16384-bit ElGamal, and eventually it would become impractical, which is why there is so much interest in elliptic curve crypto (which allows shorter keys to be used for the same level of security).
So the answer is, "Yes, from one perspective, No from the other."
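To put numbers on the key-size escalation described above: NIST's SP 800-57 tables map finite-field key sizes (ElGamal/DH/RSA) to symmetric-equivalent strengths. The figures below are from memory of those tables and should be treated as approximate:

```python
# Approximate comparable-strength figures (after NIST SP 800-57):
# how large a finite-field (ElGamal/DH/RSA) key you need for a given
# symmetric-equivalent security level.
comparable_bits = {
    1024: 80,
    2048: 112,
    3072: 128,
    7680: 192,
    15360: 256,
}

# A 16384-bit ElGamal key sits just past the 256-bit-symmetric line;
# elliptic-curve systems reach the same level with ~512-bit keys.
for ff, sym in comparable_bits.items():
    print(f"{ff:>6}-bit finite-field key ~ {sym}-bit symmetric")
```

The steeply growing left column is exactly why "just make the keys bigger" eventually becomes impractical for finite-field systems but stays cheap for elliptic curves.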
> My father told me when he was young, he visited Oak Ridge National Labs on a trip, and while there, they told him they had satellites that could read the print on a newspaper.
It might be easy to find out. Just check when the NSA cancelled their newspaper subscriptions.
There's a general question among cryptographers: How far ahead of the general public is the NSA?
The answer: We don't know. Interestingly, back in the early 90s they were maybe 15ish years ahead of us. Decisions they made back then weren't understood until the 2000s. However, the gap may have been closed somewhat as of late. The public found flaws in SHA-1 (an algorithm by the NSA) in the 2000s that we believe the NSA didn't actually know about yet.
A general note on cryptography security margins (not really an answer, sorry, just some thoughts): The margins are designed to take into account future advances in technology. The community chooses the problems with the most conservative, best understood parameters, and that seem to be the least-likely to experience a break-through. As well, it tends to pick things with margins like "40+ years", meaning that even if we continue to improve our attacks and computing power at the same rate we have been, it is generally expected that it'll take at least 40 years before we're good enough to break it. (Obviously the experts turn out to be wrong sometimes.)
The community thinks very long term and tries to avoid ever picking anything that might plausibly be broken in just 20 years. It takes (in rough terms) at least 5 years just to get through the review process and get an algorithm into standards, another 5 to get it into widespread usage, and another 5 to transition away from it. So a very popular crypto algorithm needs at least 15 years from its introduction to run its life cycle, assuming there's no lull of happiness where it's just existing as a secure, commonly used standard. So there's little point in introducing any algorithm with anything but a very low likelihood of being broken within 20 years, because it would spend only a fraction of its lifespan serving as a widely-used, secure standard.
Last point: Breaks tend to be slow, and big breaks are usually smelled very far in advance. When we're 10 or so years out from a big break, we have a good chance of knowing it and we can start to transition away. Someone 20 years ahead of us may very well only find big breaks just before we start to migrate away from the algorithm.
Due to that, we might optimistically (optimism has no place in crypto, I know ;-)) think that an adversary 20 years ahead of us has relatively minimal advantage.
> they had satellites that could read the print on a newspaper.
This is physically impossible, for the reasons given by marssaxman below; specifically, the resolution of an imaging system is limited by diffraction. In order to read a newspaper from orbit, you would need a ridiculously large aperture. Furthermore, you've certainly seen declassified Cold War satellite and aerial (U-2) imagery. You know what it looks like. Do you seriously believe they had something else that could read newspapers?
Assume that over some time frame any crypto solution will be compromised (the entire system matters here, since it's very easy to use encryption insecurely). You are essentially buying yourself time. So the question is: how much time do you need to buy?
Ok, so say you select a very large key (the effective strength depends on the algorithm, not just the number of key bits). A 4096-bit key gives about 1.04 × 10^1233 combinations; assuming someone tried to brute-force this at 100,000 checks per second (a low estimate), it would take roughly 1.66 × 10^1220 years to crack on average, since the attacker expects to search half the keyspace before finding the key.
2^4096 / 2 / 100000 / 60 / 60 / 24 / 365
So that's a freaking long time to keep that data secure. Even radically scaling up the brute-force attack across the entire world would be akin to boiling the oceans. (Not going to do the CPU/watt/check calculation to determine how much energy it would actually take compared to boiling the oceans...)
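For anyone who wants to check the arithmetic above, it reproduces in a few lines (Python's big integers handle the ~1200-digit values exactly):

```python
# Reproducing the figure quoted above: average brute-force time for a
# 4096-bit key at 100,000 guesses per second.
keyspace = 2 ** 4096                       # ~1.04e1233 combinations
guesses_per_second = 100_000               # deliberately low estimate
seconds_per_year = 60 * 60 * 24 * 365

# On average the attacker searches half the keyspace before hitting the key.
avg_years = keyspace // 2 // guesses_per_second // seconds_per_year

# avg_years is a 1221-digit number of years.
digits = str(avg_years)
print(f"~{digits[0]}.{digits[1:3]}e{len(digits) - 1} years")
```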
So are you safe? No, because starting in WWII very smart mathematicians have been finding ways to crack algorithms, exploiting patterns and holes in encryption systems that make the search space orders of magnitude smaller. So the best we can do is select well-attacked, well-researched, but still-secure systems, use a good key length, and pray (and I am not a religious man).
Edit: If you wish to be truly paranoid (I don't recommend it): most of the important crypto research has been done by state organizations; this is how AES was selected from a group of designs submitted to NIST. And there are few well-researched, still-secure alternatives to AES out there (elliptic-curve systems, basically, but some of those designs are under patents, so they're not widely available, etc.).
Edit 2: Also, Wikipedia is a great starting point for understanding, but it is not always complete. I still haven't seen it properly explain initialization vectors or nonces.
From the article: "And while most types of software get more user-friendly over time, user-friendly cryptography seems to be intrinsically difficult. Experts are not much closer to solving the problem today than they were two decades ago."
I'm not sure I agree that user-friendly cryptography is "intrinsically difficult." It doesn't seem like it would be hard for email clients and even the Gmail frontend to pop up a message saying, "Your email is insecure. To let people send you private messages securely, set up your 'public key' now. It's easy." Then a short wizard would walk users through the process and automatically append the public key to all outgoing messages.
On the other side, if you were going to send a message to a friend, the email client would check if that person has published a public key and then ask, "The recipient allows secure messages. Would you like us to send this message securely?"
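That flow is easy to sketch. Everything below is hypothetical: the `directory` dict stands in for whatever key-discovery mechanism a real client would use, and all the names are invented for illustration:

```python
# Hedged sketch of the client-side flow described above: check whether
# the recipient has published a public key, and encrypt if so.

directory = {"alice@example.com": "-----BEGIN PUBLIC KEY-----..."}

def encrypt_for(key: str, body: str) -> str:
    # Placeholder, not real crypto -- a real client would use OpenPGP here.
    return f"<ciphertext for {key[:20]}...>"

def compose(recipient: str, body: str) -> dict:
    key = directory.get(recipient)
    if key is not None:
        # "The recipient allows secure messages." -> encrypt by default
        return {"to": recipient, "body": encrypt_for(key, body), "encrypted": True}
    return {"to": recipient, "body": body, "encrypted": False}

msg = compose("alice@example.com", "hello")
print(msg["encrypted"])
```

The hard parts a real client must solve are exactly the ones this sketch waves away: where the directory lives, and why you should trust the keys in it.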
Google and Microsoft and other large companies are no strangers to implementing a feature and using their size and clout to quickly make it a de facto standard. The real reason we don't have easy end-user cryptography is that these companies would lose the ability to mine your data and build new services on top of it (and the article mentions this too).
"The real reason we don't have easy end-user cryptography is that these companies would lose access to mine your data"
Jeremy Kun recently wrote a good article summarizing some recent advances in encryption that make your statement somewhat less-than-entirely-accurate (scan for "differential privacy"):
And where is the private key stored? On Google or Microsoft's server? What then would be the point? (I assume you'll answer that it'll be done client-side, but JavaScript cryptography is a whole mess of fail. But that's a separate issue.)
Not so. Nothing stops Linux distros from defaulting to mail clients and file systems that encrypt everything by default, but the reality is that most people can't be bothered. I certainly can't, and I don't feel like putting myself out to encrypt everything in order to make it popular. As has been pointed out, the metadata of who you email and phone, while not probative in the same fashion as the contents of calls and emails, is nevertheless a significant source of data, and encryption won't alter that without major changes to the architecture of mail.
"Intelligence-community-proof" is something of a fallacy. You can only make it more expensive for the NSA to get your email, by forcing them (or another arm of the government) to penetrate your client and extract the key there.
And, all you have done is make damn sure they keep your metadata records. Somewhere I read that sending encrypted email is an automatic flag, in the same category as using words that incite violence.
So to truly make it effective, encrypted email has to be the norm, not the exception.
That sounds unlikely; encrypted email is relatively common in business. For example, many domain registrars used it as a mechanism for changing domain settings before "APIs" were a thing.
That's the question about all of this that I don't know how to answer. Just extend it a bit...
There are OSes that won't give root access to the NSA, encryption that the NSA won't be able to read, and cloud services that the NSA won't be able to access even with the cooperation of the CEO. Why are none of them widely used?
And I don't accept the answer in the article as sufficient. Yes, a few things are harder when you want any level of security, but not all. There are plenty of applications where security just won't disturb you (like VoIP), and plenty of places that put security above all other concerns and should care about this (like non-US militaries). Yet nearly nobody chooses the secure path.
Because the vast majority of people like privacy in theory but not enough to spend the hour it would take to learn how to encrypt their email and documents.
Seriously, how many HN users have spent hours complaining about privacy on here but still don't encrypt their own email? This isn't to excuse anything illegal the US gov't might be doing, but if it matters as much to people as they say you'd think they'd have at least taken some immediate action.
>Seriously, how many HN users have spent hours complaining about privacy on here but still don't encrypt their own email?
I would think that most HN users would be willing to encrypt their email, but know they can't convince their friends/family/etc to do so. Encryption takes two to tango.
Let me explain some of my travails trying to use PGP with Thunderbird:
The install of T-Bird wasn't too bad
The install of OpenPGP was not easy, but I managed it. The instructions on the site were not all that clear and were for an out-of-date version, but YouTube helped out a lot. My mom, the business owners, or a computer science teacher at Central High School simply do not have time to do this. This could be streamlined.
The making of keys and storing of data was totally obtuse; fortunately, the wizard guided me through a lot of it. This could be streamlined.
Now sending a message is where it gets tough. OpenPGP says that I have to [shift]+left-click the Write button in T-bird to make sure HTML won't be used, so the PGP message will be decrypted correctly. This is nonsense. Why is this happening?
OK, now assuming I have a plain-text email, I have to hit [ctrl]+[shift]+[s] and [ctrl]+[shift]+[e] to sign and encrypt. BS. This needs to be better: just a pop-up where I type in the pass-phrase (brilliant wording, btw; "phrase" makes it so clear that it has to be many words long, my mom will surely understand this).
Ok now my buddy can't read it because I did not send him a public key? What the hell are those? Why do I care? I thought I put in my pass-phrase? Didn't he? What is going on?
I sort this out, I find the public key and send it over. Now he can read it. But wait I have another buddy that I have to do this with. Where were those options in the menus again?
There needs to be a button that remembers whether I sent them my public key, sends it if I haven't, and then automatically tells their email client that I don't have theirs and fetches it with their permission.
Awww, fuck it... the NSA can probably crack this anyway.
It's even worse than you think, because several of the things you said in there don't actually make sense. It sounds like you possibly didn't manage to get the message encrypted at all, just signed.
And how you exchange public keys matters a great deal -- if you just send them over email, you haven't actually achieved any meaningful security.
So yes. The entire process is a usability nightmare.
How timely. This NSA fiasco prompted me to finish up an old project, https://boxuptext.com/, a convenient webapp that encrypts a message into a URL entirely in the browser. It's ready for use.
Not many people use crypto because, in general, it's hard to set up and hard to use. A webapp is accessible and easy to use, and provides reasonable security.
I know there's a prevailing view against doing crypto in JavaScript, and I've gone the extra steps to address the negatives. In the end, I think the benefits of doing JavaScript crypto in the browser outweigh the negatives. See https://boxuptext.com/faq#benefits
It's not about convenience.
It's about money. Like everything, really.
Using GPG/PGP, for example (which IMO is the best solution), is nice. It has a good, convenient design.
The clients, UI, etc. are terrible. They're extremely inconvenient.
That can be fixed. This needs some time and a little dedication.
Nobody will pay for a product that has proper, easy, fast PGP support across the board. Nobody.
Since it's not a trivial task, and the benefits are "only" privacy, it didn't happen yet.
If anything, people re-code their own, incompatible and generally lesser versions of PGP, because they get financial gain or popularity from it (patching GPG doesn't give you as much popularity as making your own, you see... and we're quite ego-driven / NIH-happy).
So, here we are. And I'm to blame too, I haven't worked on this either.
I'm secretly hoping things like PRISM will actually help move this forward.
The main point for me is to have a government I don't need to protect myself from. And more generally, a society where I don't need to disguise my every action. Points about the impracticality of strong encryption are secondary. Here are some of them anyway:
* The vast majority of internet users don't have the domain knowledge needed to use strong encryption effectively. A classic example with e-mail is using a prominent phrase from the plain text of the message body in the (unencrypted) subject field.
* Any cryptography scheme is vulnerable to social engineering, attacks on the trust networks used to exchange keys, etc. Avoiding these requires a nontrivial and ongoing amount of effort even for expert users.
* Encryption complicates archival and search of content even for its author.
* Any service that would help users with the above would be legally obligated to provide information to authorities anyway.
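The first bullet is worth a concrete illustration: OpenPGP encrypts only the message body, so headers, including the subject, travel in the clear. The addresses and subject below are made up:

```python
# An "encrypted" email still leaks its headers: OpenPGP protects only
# the body, so the Subject line travels in the clear.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Re: meeting at the safehouse"   # visible to any observer
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\n"
    "hQEMA9...ciphertext...\n"
    "-----END PGP MESSAGE-----\n"
)

# Anyone sniffing the wire reads every header, ciphertext body or not.
print(msg["Subject"])
```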
> Experts are not much closer to solving the problem [of user-friendly cryptography] today than they were two decades ago.
I disagree. Recently there have been breakthroughs in homomorphic encryption. From Wikipedia [1]:
"...any circuit can be homomorphically evaluated, effectively allowing the construction of programs which may be run on encryptions of their inputs to produce an encryption of their output. Since such a program never decrypts its input, it can be run by an untrusted party without revealing its inputs and internal state."
While currently known constructions with the right mathematical properties are kind of slow, I'm sure that a lot of people are now interested and in the future we'll eventually be able to do it at practical speeds (especially with the help of future computers that are faster, and/or have more cores, and/or have dedicated coprocessors hard-wired for homomorphic encryption computations, like recent x86 chips have hardware accelerated AES [2]).
If this happens, websites will be able to implement features, like search, that rely on manipulation of user data, without having access to that data themselves.
> [Certain] features depend on Facebook’s servers having access to a person’s private data
Today this is true, at least for people who aren't on the cutting edge of research in this field. But it might not be true tomorrow, if homomorphic encryption ever becomes practical (both in terms of fast algorithms, and in terms of frameworks/libraries which make it easy for developers to use).
Off-topic remark: Homomorphic encryption will also impact the economics of cloud computing, since you'll be able to use CPU cycles provided by others without the security concerns of disclosing the unencrypted confidential data you want them to manipulate.
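The idea can be made concrete with a toy example. Textbook RSA happens to be multiplicatively homomorphic (fully homomorphic schemes generalize this to arbitrary circuits); the primes below are tiny and the scheme is deliberately insecure, purely to show computation on ciphertexts without decrypting them:

```python
# Toy illustration of a homomorphic property using textbook RSA,
# which is multiplicatively homomorphic. NOT secure -- tiny primes,
# no padding -- purely to show computing on ciphertexts.

p, q = 61, 53                 # toy primes
n = p * q                     # 3233
phi = (p - 1) * (q - 1)       # 3120
e, d = 17, 2753               # e * d == 1 (mod phi)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 6, 7
# Multiply the ciphertexts: the untrusted party never sees a or b...
c_product = (encrypt(a) * encrypt(b)) % n
# ...yet the decrypted result is the product of the plaintexts.
print(decrypt(c_product))
```

A search feature built this way would let the server evaluate the query over encrypted data and return an encrypted answer it cannot itself read.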
We do. We use it in the browser, for communicating between client and server. Service providers use it internally, for storing messages on disk.
The problem is not one of operations; the problem is one of law. Google (and others) have been forced under federal law to provide the plaintext to the government, or have their individual persons face jail time.
This is not a technological problem, and there are no technological solutions.
Hmm... the writer of the article almost echoes the exact same narrative as Moxie Marlinspike's Defcon 18 talk, "Changing Threats to Privacy": http://youtu.be/eG0KrT6pBPk
[+] [-] david_shaw|12 years ago|reply
Yup. Except it's not that easy.
Let's say that you're using OTR to provide very strong end-to-end encryption for a conversation between yourself and a buddy, Bob. Maybe he's in a hostile area, and you're worried that if his government sniffs his traffic, that he could be executed for speaking to Americans.
Data in transit that is intercepted, if configured correctly, is almost certainly safe. No one will be able to immediately decrypt it because of the strong encryption.
So are you safe?
Probably not. The next step that government would take would be to raid your friend Bob's apartment, arrest him, and take his hard disk. His OTR key (and, if using Pidgin, account credentials in plaintext if stored) is plainly available on the disk. You now have the private key.
But what if he used Truecrypt or PGP full-disk encryption? His data would be safe from decryption then, right?
Sort of. If they're trying to break the actual encryption, they'd likely be unable to do so. Unfortunately, the weak point for Truecrypt disks or volumes isn't the crypto... it's the passphrase. The passphrase can be brute-forced significantly more easily than breaking the encryption itself. Furthermore, as xkcd so accurately pointed out, a hostile government will throw you in prison (or, worse, hit you repeatedly with a wrench) until you divulge your passphrase and data.
Encryption is great, and I encourage everyone to use reliably strong crypto. Will that keep your data safe from the criminals that stole your work laptop? Absolutely. Will it keep your data safe from the NSA? You're kidding yourself.
[+] [-] tedks|12 years ago|reply
The article is in response to a dragnet surveillance program, where everyone's communications are watched and presumably datamined. It's very easy to do this, because nothing is encrypted, and everyone uses services that expose metadata (like who is IM'ing who).
Your comment is entirely true. However, it presents an adversary that doesn't want dragnet, but targeted surveillance. It assumes that Bob will be immediately arrested if his communications become encrypted.
This is not the threat model that we're faced with now. Let's say you and Bob communicate using accounts you've made on random XMPP servers using Tor, and all the messages are encrypted with OTR. Both servers are in the US, and the NSA's metadata database shows [email protected] sending lots of ciphertext to [email protected].
This is "NSA-proof" in that the NSA would not know to link [email protected] with you using their existing systems. They would have to drastically escalate the cost of their surveillance program with respect to you and Bob to figure out what you're talking about. Unless you really are a political dissident, conspiracy theorist who accidentally discovered the UN's black helicopter program, or radical Islamist, you are now out of the surveillance dragnet.
That is to say, unless the threat model changes, using privacy-enhancing technology will keep your data safe from PRISM and similar dragnet programs.
[+] [-] CodeMage|12 years ago|reply
The way I see it, that's not the greatest danger right now. Instead, we should be worried about the government being able to passively spy on everyone at the same time, by indiscriminately siphoning and analyzing data.
[+] [-] tlrobinson|12 years ago|reply
However, according to https://en.wikipedia.org/wiki/Perfect_forward_secrecy OTR does provide "perfect forward secrecy as well as deniable encryption". Doesn't that provide some protection against rubber-hose cryptanalysis?
[+] [-] JulianMorrison|12 years ago|reply
They can use "hitting the suspect with a wrench" cryptanalysis on a solo victim, but not on a crowd.
[+] [-] bmelton|12 years ago|reply
http://www.forbes.com/sites/andygreenberg/2013/03/13/cryptog...
[+] [-] jdonahue|12 years ago|reply
Not to detract from the point of your post, but for anyone interested, that's what TrueCrypt's 'plausible deniability' feature [1] is for. It lets you create a hidden volume nested inside an outer volume, each with its own password, so if you're ever forced to give up a passphrase by a government agency or anyone else, you can hand over the outer volume's password and (in theory) appear to be fully cooperating. Given only the outer volume's passphrase, it is impossible (short of brute-forcing the hidden volume's passphrase) to prove that the hidden volume exists. Ideally, you'd put something "embarrassing" but legal on the outer volume (e.g., gay porn), to make the "plausible deniability" for using full disk encryption more "plausible".
[1] http://www.truecrypt.org/docs/?s=plausible-deniability
[+] [-] sneak|12 years ago|reply
The DEFINING FEATURE of OTR is that of forward secrecy; key compromise does not permit retroactive decryption.
Otherwise, we could just use TLS. (Technically, we could now, just enforcing the EDH modes.)
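sneak's parenthetical (plain TLS with the ephemeral-DH modes enforced) can be sketched with Python's standard `ssl` module. This is a hedged illustration of restricting a context to forward-secret suites, not a claim about any particular deployment:

```python
import ssl

# Restrict a client context to ephemeral key-exchange suites, so a later
# compromise of the server's long-term key cannot retroactively decrypt
# recorded traffic -- the forward secrecy property sneak describes.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")  # (EC)DHE only, for TLS <= 1.2

# TLS 1.3 suites (the TLS_* names) always use an ephemeral exchange, so
# everything left in the negotiable list here is forward-secret.
names = [c["name"] for c in ctx.get_ciphers()]
print(names[:2])
```

Anything outside that cipher string (e.g. static-RSA key exchange) is simply never offered by this context.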
[+] [-] kelnos|12 years ago|reply
Not true, actually. OTR provides perfect forward secrecy. Gaining access to the private key does not give you access to previous conversations.
[+] [-] foobarbazqux|12 years ago|reply
Doesn't brute forcing this depend on the strength of the passphrase? For large enough N, if neither can be done in the next N years, does it really matter if it's significantly easier? Isn't there a non-negligible likelihood that in the next N years we'll figure out ways to break stronger forms of encryption but we won't figure out how to brute force strong passphrases efficiently?
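The "large enough N" question can be made concrete. A sketch, with all the numbers assumed rather than taken from the thread: 10^12 guesses per second for the attacker (roughly a large GPU cluster), and the 7776-word Diceware list as the passphrase source:

```python
import math

GUESSES_PER_SECOND = 1e12            # assumed attacker capability
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
DICEWARE_WORDS = 7776                # ~12.9 bits of entropy per word

def years_to_crack(num_words):
    """Average years to brute-force a randomly chosen Diceware passphrase."""
    bits = num_words * math.log2(DICEWARE_WORDS)
    # Average case: the attacker expects to search half the space.
    return 2 ** (bits - 1) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

# Four random words fall in about half an hour; ten words outlast the
# age of the universe by many orders of magnitude.
print(f"{years_to_crack(4):.2e} years vs {years_to_crack(10):.2e} years")
```

So yes: for a passphrase chosen uniformly at random, "N" is entirely a function of its length, and a handful of extra words moves it from trivially crackable to effectively permanent.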
[+] [-] jiggy2011|12 years ago|reply
Besides, in such an example the government would have to already suspect Bob on some other grounds. In the case of a despotic regime, they probably already have him in prison.
[+] [-] unknown|12 years ago|reply
[deleted]
[+] [-] gwgarry|12 years ago|reply
This whole idea that we can't do it is defeatist. We can and we should do it, and then we should do more: as much as we can, to make it as hard as possible. And if the government still wants to do shit, then let them. That's their prerogative.
[+] [-] Xcelerate|12 years ago|reply
My father told me when he was young, he visited Oak Ridge National Labs on a trip, and while there, they told him they had satellites that could read the print on a newspaper. At the time, it wasn't classified information; it was just something that nobody knew. Approximately 15-20 years later, satellites with that capability became well-known. This indicates to me that top secret technology is probably somewhere around 15-20 years ahead of what the general public knows about. This may be less true today than it was back then since nowadays the equipment and factories to develop state-of-the-art technology run in the billions of dollars.
Where I'm going with this: is it reasonable to assume that "future technology" 20 years from now could crack AES-256 or PGP? If so, it seems reasonable to me that the NSA could already crack today's encryption for high-priority data. Add that to the fact that they tend to hire the very best experts in the field (mathematicians and cryptographers) and it doesn't seem entirely unreasonable to me that their decryption technologies are pretty good. Of course, I'm not talking about better technology in a brute-force sense; it would still be impossible to crack 256-bit encryption. I'm talking about algorithmic weaknesses.
But then again, I have only a basic knowledge of cryptography. Would any experts like to comment?
[+] [-] drzaiusapelord|12 years ago|reply
Err, maybe in optics at the time, but you can't just generalize like this. You can't consistently be ahead of everything all the time. More than likely the NSA is subject to Moore's law like everyone else.
[+] [-] betterunix|12 years ago|reply
There are a few related issues here, and so the answer is a bit complicated.
The only evidence for the security of AES is heuristic, based on testing the output of the cipher to check for properties that secure block ciphers should have. Some new attack strategy could completely undermine AES. Similarly, PGP relies on block ciphers and hash functions that are based on such evidence.
On the other hand, public key cryptography has proofs of security under certain assumptions about the complexity of certain problems. A proof that P != NP is necessary to prove that PKE is secure, but it is not sufficient on its own and we do not even have that much.
Now, assuming that (a) the heuristic evidence for AES and various hash functions is a reliable indicator of security and (b) the assumptions about computational complexity are correct, then both AES and PGP can be used essentially indefinitely. The reason is that your key size can continue to increase -- for AES, you can iterate the cipher (e.g. "triple AES"), and for PGP you can keep making your keys larger (16384-bit ElGamal?), and you will always be able to stay ahead of your opponent. There are issues with this approach, of course -- it would take a lot of computing power to actually use 16384-bit ElGamal, and eventually it would become impractical, which is why there is so much interest in elliptic curve crypto (which allows shorter keys to be used for the same level of security).
So the answer is, "Yes, from one perspective, No from the other."
[+] [-] megablast|12 years ago|reply
It might be easy to find out. Just check when the NSA cancelled their newspaper subscriptions.
[+] [-] B-Con|12 years ago|reply
The answer: We don't know. Interestingly, back in the early 90s they were maybe 15-ish years ahead of us: decisions they made back then weren't understood until the 2000s. However, the gap may have closed somewhat of late. The public found flaws in SHA-1 (an NSA algorithm) in the 2000s that, we believe, the NSA didn't yet know about itself.
A general note on cryptography security margins (not really an answer, sorry, just some thoughts): The margins are designed to take into account future advances in technology. The community chooses the problems with the most conservative, best understood parameters, and that seem to be the least-likely to experience a break-through. As well, it tends to pick things with margins like "40+ years", meaning that even if we continue to improve our attacks and computing power at the same rate we have been, it is generally expected that it'll take at least 40 years before we're good enough to break it. (Obviously the experts turn out to be wrong sometimes.)
The community thinks very long term and tries to avoid ever picking anything that might plausibly be broken in just 20 years. It takes (in rough terms) at least 5 years just to get through the review process and into standards, another 5 to reach wide-spread usage, and another 5 to transition away from it. So a very popular crypto algorithm needs at least 15 years from its introduction to run its life cycle, and that assumes there's no lull of happiness where it just exists as a secure, commonly used standard. So there's little point in introducing any algorithm that has anything but a very low likelihood of being broken within 20, because it would spend only a fraction of its lifespan serving as a widely-used, secure standard.
Last point: Breaks tend to be slow, and big breaks are usually smelled very far in advance. When we're 10 or so years out from a big break, we have a good chance of knowing it and we can start to transition away. Someone 20 years ahead of us may very well only find big breaks just before we start to migrate away from the algorithm.
Due to that, we might optimistically (optimism has no place in crypto, I know ;-)) think that an adversary 20 years ahead of us has relatively minimal advantage.
[+] [-] ErsatzVerkehr|12 years ago|reply
This is physically impossible, for the reasons given by marssaxman below; specifically, the resolution of an imaging system is limited by diffraction. In order to read a newspaper from orbit, you would need a ridiculously large aperture. Furthermore, you've certainly seen declassified Cold War satellite and aerial (U-2) imagery. You know what it looks like. Do you seriously believe they had something else that could read newspapers?
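The diffraction argument is easy to check on the back of an envelope. The numbers below are assumptions, not from the thread: a 250 km orbit, 550 nm (green) light, and 1 mm print detail to resolve:

```python
# Back-of-envelope check of the diffraction limit invoked above.
wavelength = 550e-9   # m, visible green light (assumed)
altitude = 250e3      # m, low Earth orbit (assumed)
feature = 1e-3        # m, detail size needed to read newsprint (assumed)

# Rayleigh criterion: smallest resolvable angle ~ 1.22 * lambda / D,
# so the aperture D needed to resolve `feature` at `altitude` is:
aperture = 1.22 * wavelength * altitude / feature
print(f"required mirror diameter: {aperture:.0f} m")
```

The answer comes out well over a hundred meters of mirror, versus the ~2.4 m apertures of Hubble-class spy satellites, which supports ErsatzVerkehr's point.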
[+] [-] mey|12 years ago|reply
Ok, so suppose you select a very large key (what counts as "large" depends on the encryption algorithm as well as the raw bit count). For a 4096-bit key that would be about 1.044 × 10^1233 combinations; assuming someone tried to brute force this at 100,000 checks per second (a low estimate), it would take roughly 1.656 × 10^1220 years to crack on average, since an attacker expects to search half the keyspace (this is the average case, not the birthday paradox, which applies to collision searches).
2^4096 / 2 / 100000 / 60 / 60 / 24 / 365
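A quick script reproducing that arithmetic (the division by 2 is the average-case search of half the keyspace), using Python's exact big integers:

```python
# Reproduce the estimate above: brute-forcing a 4096-bit key at
# 100,000 guesses/second, needing on average half the keyspace.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365   # 31,536,000

years = 2**4096 // 2 // 100_000 // SECONDS_PER_YEAR

# The result has 1221 digits, i.e. roughly 1.66 x 10^1220 years,
# matching the figure quoted in the comment.
print(f"{len(str(years))} digits")
```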
So that's a freaking long time to keep that data secure. Even radically scaling up the brute force attack across the entire world would be akin to boiling the oceans. (Not going to do out the cpu/watt/check number calculation to determine how much energy it would actually take compared to boiling the oceans...)
So are you safe? No: ever since WWII, very smart mathematicians have been finding ways to crack the algorithms, spotting patterns and holes in encryption schemes that shrink the search space by orders of magnitude. So the best thing we can do is select well-attacked, well-researched but still-secure systems, use a good key length, and pray (and I am not a religious man).
Edit: If you wish to be truly paranoid (I don't recommend it): most of the important crypto research has been done by state organizations; this is how AES was selected by NIST from a group of submitted designs. And there are few still-secure, well-researched algorithms besides AES out there (basically elliptic-curve designs, but some of those are under patents and so not widely available, etc.).
Edit 2: Also, Wikipedia is a great starting point for understanding, but it's not always complete. I still haven't seen it properly explain initialization vectors or nonces.
Oh and one last thing http://xkcd.com/538/
See
https://en.wikipedia.org/wiki/Key_size
https://en.wikipedia.org/wiki/Brute-force_search
https://blogs.oracle.com/dcb/entry/zfs_boils_the_ocean_consu...
https://en.wikipedia.org/wiki/World_War_II_cryptography
https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
https://en.wikipedia.org/wiki/Elliptic_curve_cryptography
[+] [-] rorrr2|12 years ago|reply
[+] [-] aasarava|12 years ago|reply
I'm not sure I agree that user-friendly cryptography is "intrinsically difficult." It doesn't seem like it would be hard for email clients and even the Gmail frontend to pop up a message saying, "Your email is insecure. To let people send you private messages securely, set up your 'public key' now. It's easy." Then a short wizard would walk users through the process and automatically append the public key to all outgoing messages.
On the other side, if you were going to send a message to a friend, the email client would check if that person has published a public key and then ask, "The recipient allows secure messages. Would you like us to send this message securely?"
Google and Microsoft and other large companies are no strangers to implementing a feature and using their size and clout to quickly make it a de facto standard. The real reason we don't have easy end-user cryptography is that these companies would lose the ability to mine your data and provide new services on top of it (the article mentions this too).
[+] [-] jordan0day|12 years ago|reply
Jeremy Kun recently wrote a good article summarizing some recent advances in encryption that make your statement somewhat less-than-entirely-accurate (scan for "differential privacy"):
http://jeremykun.com/2013/06/10/why-theoretical-computer-sci...
[+] [-] bdamm|12 years ago|reply
And, all you have done is make damn sure they keep your metadata records. Somewhere I read that sending encrypted email is an automatic flag, in the same category as using words that incite violence.
So to truly make it effective, encrypted email has to be the norm, not the exception.
[+] [-] marcosdumay|12 years ago|reply
There are OSes that won't give root access to the NSA, encryption that the NSA won't be able to read, and cloud services that the NSA won't be able to access even with the cooperation of the CEO. Why are none of them widely used?
And I don't accept the answer in the article as sufficient. Yes, a few things are harder when you want any level of security, but not all. There are plenty of applications where security just won't disturb you (like VoIP), and plenty of places that put security above all other concerns and should care about this (like non-US militaries). Yet nearly nobody chooses the secure path.
[+] [-] crazy1van|12 years ago|reply
Seriously, how many HN users have spent hours complaining about privacy on here but still don't encrypt their own email? This isn't to excuse anything illegal the US gov't might be doing, but if it matters as much to people as they say you'd think they'd have at least taken some immediate action.
[+] [-] unimpressive|12 years ago|reply
I would think that most HN users would be willing to encrypt their email, but know they can't convince their friends/family/etc to do so. Encryption takes two to tango.
[+] [-] Balgair|12 years ago|reply
Let me explain some of my travails trying to use PGP with Thunderbird:
The install of T-Bird wasn't too bad
The install of OpenPGP was not easy, but I managed it. The instructions on the site were not all that clear and were for an out-of-date version, but YouTube helped out a lot. My mom, the business owners, or a computer science teacher at Central High School simply do not have time to do this. This could be streamlined.
The making of keys and storing of data was totally obtuse; fortunately, the wizard guided me through a lot of it. This could be streamlined.
Now sending a message is where it gets tough. OpenPGP says that I have to [shift]+left-click the Write button in T-bird to make sure HTML won't be used, so the PGP message will be decrypted correctly. This is nonsense. Why is this happening?
Ok, now assuming I have a plain-text email, I have to hit [ctrl]+[shift]+[s] and [ctrl]+[shift]+[e] to sign and encrypt. BS. This needs to be better: just a pop-up where I type in the pass-phrase (brilliant wording, btw; "phrase" makes it so clear it has to be many words long that even my mom can understand it).
Ok now my buddy can't read it because I did not send him a public key? What the hell are those? Why do I care? I thought I put in my pass-phrase? Didn't he? What is going on?
I sort this out, I find the public key and send it over. Now he can read it. But wait I have another buddy that I have to do this with. Where were those options in the menus again?
There needs to be a button that remembers if I sent the public key to them, sends it if I did not, and then automatically tells their email client that I don't have theirs and gets theirs with permission from them.
Awww, fuck it... the NSA can probably crack this anyway.
[+] [-] ef4|12 years ago|reply
And how you exchange public keys matters a great deal -- if you just send them over email, you haven't actually achieved any meaningful security.
So yes. The entire process is a usability nightmare.
[+] [-] TheCondor|12 years ago|reply
It's a Thunderbird plugin and it's pretty good.
[+] [-] ww520|12 years ago|reply
Not many people use crypto because in general it's hard to set up and hard to use. A webapp is accessible, easy to use, and can provide reasonable security.
I know there's a prevailing view against doing crypto in JavaScript, and I've gone the extra steps to address the negatives. In the end, I think the benefits of doing crypto in JavaScript in the browser outweigh the negatives. See https://boxuptext.com/faq#benefits
[+] [-] zobzu|12 years ago|reply
Using GPG/PGP for example (which IMO is the best solution) is nice. It has a good, convenient design. The clients, UI, etc., are terrible. They're extremely inconvenient. That can be fixed; it needs some time and a little dedication. But nobody will pay for a product that has proper, easy, fast PGP support across the board. Nobody. Since it's not a trivial task, and the benefits are "only" privacy, it hasn't happened yet. If anything, people re-code their own, incompatible and generally lesser versions of PGP, because they'll get financial gain or popularity from it (patching GPG doesn't give you as much popularity as making your own, you see... and we're quite ego-driven / NIH-happy).
So, here we are. And I'm to blame too, I haven't worked on this either. I'm secretly hoping things like PRISM will actually help making this move forward.
[+] [-] beefman|12 years ago|reply
* The vast majority of internet users don't have the domain knowledge needed to use strong encryption effectively. A classic example with e-mail is using a prominent phrase from the plain text of the message body in the (unencrypted) subject field.
* Any cryptography scheme is vulnerable to social engineering, attacks on the trust networks used to exchange keys, etc. Avoiding these requires a nontrivial and ongoing amount of effort even for expert users.
* Encryption complicates archival and search of content even for its author.
* Any service that would help users with the above would be legally obligated to provide information to authorities anyway.
[+] [-] rsync|12 years ago|reply
I don't trust SSL, for various reasons of implementation and many, many questions about weak links in the PKI chain, etc.
But I rely on SSH. I'd like very much to see some kind of assurance that this is a reasonable thing to rely on...
[+] [-] csense|12 years ago|reply
I disagree. Recently there have been breakthroughs in homomorphic encryption. From Wikipedia [1]:
"...any circuit can be homomorphically evaluated, effectively allowing the construction of programs which may be run on encryptions of their inputs to produce an encryption of their output. Since such a program never decrypts its input, it can be run by an untrusted party without revealing its inputs and internal state."
While currently known constructions with the right mathematical properties are kind of slow, I'm sure that a lot of people are now interested and in the future we'll eventually be able to do it at practical speeds (especially with the help of future computers that are faster, and/or have more cores, and/or have dedicated coprocessors hard-wired for homomorphic encryption computations, like recent x86 chips have hardware accelerated AES [2]).
If this happens, websites will be able to implement features, like search, that rely on manipulation of user data, without having access to that data themselves.
> [Certain] features depend on Facebook’s servers having access to a person’s private data
Today this is true, at least for people who aren't on the cutting edge of research in this field. But it might not be true tomorrow, if homomorphic encryption ever becomes practical (both in terms of fast algorithms, and in terms of frameworks/libraries which make it easy for developers to use).
Off-topic remark: Homomorphic encryption will also impact the economics of cloud computing, since you'll be able to use CPU cycles provided by others without the security concerns of disclosing the unencrypted confidential data you want them to manipulate.
[1] http://en.wikipedia.org/wiki/Homomorphic_encryption#Fully_ho...
[2] http://en.wikipedia.org/wiki/AES_instruction_set
[+] [-] sneak|12 years ago|reply
The problem is not one of operations; the problem is one of law. Google (and others) have been compelled under federal law to provide the plaintext to the government, or have individual employees face jail time.
This is not a technological problem, and there are no technological solutions.
[+] [-] unknown|12 years ago|reply
[deleted]
[+] [-] aleclarsoniv|12 years ago|reply
Um... what? Can't the user just reset his/her password, instead of a website emailing him/her the old password?...
[+] [-] kimlelly|12 years ago|reply
The FIRST question is: Is it really a solution?
The answer to that: NO, see: https://news.ycombinator.com/item?id=5879308