I'm the cryptographer (lvh on GitHub) providing some context on the issue. I'd be happy to answer any other questions here, although I think the example on the ticket demonstrates the attack pretty well.
The way you generally fix this is by providing appropriate binding between long-term and short-term secret. MQV does this with some weird group-breaking, but another way to do it would be, e.g.:
session_key
  = H(l_A*E_B + e_A*L_B)   (as computed by Alice)
  = H(l_B*E_A + e_B*L_A)   (as computed by Bob)
Note: DON'T DO THIS! USE NOISE INSTEAD. This is just for insight, although this is how at least one of the KCI-secure AKEs works. H is a secure hash function; l_X is X's long-term secret, e_X is X's ephemeral secret, L_X is X's long-term pubkey, E_X is X's ephemeral pubkey. This works because the attacker doesn't simultaneously know e_A and l_B in this model. Off the top of my head, ISTR this is how OTR's AKE works, but don't quote me on that; it's been a while since I looked at OTR specifically. This is by no means exhaustive! There are a handful of secure AKEs like this.
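The binding above can be sketched in a few lines. This is a toy model, not any real protocol: the group parameters (a Mersenne prime, generator 5) are for illustration only and insecure, and multiplicative notation is used, so the scalar-mult of the formula becomes `pow()` and the "+" becomes multiplication mod p.

```python
import hashlib
import secrets

# Toy multiplicative group -- illustration only, NOT secure parameters.
p = 2**127 - 1   # a Mersenne prime
g = 5

def keypair():
    x = secrets.randbelow(p - 2) + 1        # private scalar
    return x, pow(g, x, p)                  # (private, public)

def H(shared: int) -> bytes:
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

l_a, L_A = keypair(); e_a, E_A = keypair()  # Alice: long-term, ephemeral
l_b, L_B = keypair(); e_b, E_B = keypair()  # Bob:   long-term, ephemeral

# session_key = H(l_A*E_B + e_A*L_B) = H(l_B*E_A + e_B*L_A)
# (in this notation: H(E_B^l_a * L_B^e_a) = H(E_A^l_b * L_A^e_b))
alice_key = H(pow(E_B, l_a, p) * pow(L_B, e_a, p) % p)
bob_key   = H(pow(E_A, l_b, p) * pow(L_A, e_b, p) % p)

assert alice_key == bob_key
```

Both sides arrive at g^(l_a*e_b + e_a*l_b), so the key mixes each party's long-term secret with the *other* party's ephemeral, which is exactly the binding the construction is after.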
Another classic way to do this is by signing the handshake, but that has its own share of problems. For example, the KCI attacks on TLS worked because a certificate was being used as a static DH key.
I am not saying that as a defense of Tox. More as a question.
Why is KCI a vulnerability? Yes, I understand that the situation is the reverse of a normal crypto attack; but still, when your keys are stolen, you are screwed anyway: your messages can be forged, it can lead to MITM, ...
How is that fundamentally so different from KCI? When someone steals your keys, security doesn't apply anymore; how can a threat model include "but what if someone steals the private keys"?
Yes, this solution would solve the KCI vulnerability (assuming the + is a commutative operation, or you switch the order in one of the hash functions). I just don't see why you are advising against its use; I would be grateful if you explained why.
In addition to being a really succinct and well-written summary of how KCI attacks work and a good motivator for reading up on how modern AKE constructions like SIGMA work, this is also kind of the best possible GitHub bug report; in particular, I'm stealing this:
Is this an accurate representation of the handshake? If so, keep reading. If not, you may safely stop reading here, close the issue, and accept my apologies for the misunderstanding.
The part that really got me was that after I had this terrible impression from reading the source, the developer wrote on the bug report:
"We have a largely undocumented, untested, and not well-understood code base of about 19 ksloc (C)."
Oh lord. Then another zinger from the same author:
"Tox provides some strong security guarantees. We haven't got to the point where we can enumerate them properly, given the general lack of understanding of the code and specification."
By actually following the thread, I got the opposite idea. Its current active developers are well aware of Tox's flaws, do actually know what they're doing, and have a plan. They're being addressed.
This is a much better attitude than seen elsewhere (such as in Telegram).
And, in the first place, this is being overblown. As GrayHatter succinctly puts it:
> For anyone reading this, without a crypto background. The assertions being made are the same as saying: the lock on your house is broken because if someone steals your keys they can unlock your door.
As a Tox user, I'd love to see some of these cryptographers calling it out actually contribute lines of code. Tox fills a niche. It might not be perfectly secure, but there is also far more to software than the crypto. UI stuff takes a lot of grunt work.
Please, improve this. As a user I want Tox to be secure. But also, as a user, I'll use it rather than Skype even if it does have some concerns. It's a genuine, open source, Skype replacement that more-or-less works. The encryption stuff is just the icing on the cake.
It would be a great project even with zero encryption.
Specific recommendations for a different protocol that does not have this concern were made, and a detailed bug report with repeated explanations of the issue was provided. Why is the onus on me to also go fix the problem, when it's repeated in the issue that the authors are mostly interested in stabilizing the codebase first? (That is not a criticism, but rather: not only do I not feel this is my responsibility, the way I read it, the maintainers don't want that contribution right now.)
A friend and I are discussing what the actual vulnerability is. What I got from the report was that you can impersonate anyone when talking to A if you have A's key.
He says that is impossible and you need B's key to impersonate B when talking to A.
Could anyone that knows more about this than I do step in to clarify? Thanks!
The trick is this: in a traditional DH-based key exchange, you need (1) your private key and (2) the other person's public key. Let's say your keypair is a/A and mine is b/B (the lowercase identifier is the private key, the uppercase one is the public key).
To do an exchange, you send me A, and I send you B. You compute the DH exchange with B and a, and I compute it with A and b. The important property of the DH function is that computing DH(B,a) is the same thing as computing DH(A,b).
Now, if I steal your secret key 'a', I can do two things. One, I can obviously impersonate you. But two, I can impersonate anyone else, because I can calculate the shared secret. The reason is that I need your private key, but only the other party's public key.
Let's say you try to communicate with me now and someone has stolen 'a'. They can already have my public key, B, because it is public. Thus, when the exchange happens, you send me A. The attacker knows the secret key, 'a', already. So the attacker can intercept the communication, calculate the DH exchange of 'a' (which they stole) and 'B' (my public key), and they have calculated the shared secret. It is not possible for you to tell you have exchanged with the attacker, and not me. They never need my private key, only my public key, which will come through during the handshake. So it can always be intercepted. Remember, DH(B,a) and DH(A,b) are the same, so if both sides calculate DH(B,a), that's completely legitimate.
This is a very simplified view (and I haven't looked at the bug report in detail, TBQH, so the case for Tox is probably slightly different), but it explains why if your keys are stolen, anyone can be impersonated to you: because the impersonator only needs public information (public keys) from that point on to forge any exchange.
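The explanation above can be checked directly with a toy DH exchange. The group parameters here are insecure placeholders chosen for illustration only; the point is just that an attacker holding your stolen private key 'a' derives the shared secret from my *public* key alone.

```python
import secrets

# Toy DH group -- illustration only, NOT secure parameters.
p, g = 2**127 - 1, 5

a = secrets.randbelow(p - 2) + 1; A = pow(g, a, p)  # your keypair
b = secrets.randbelow(p - 2) + 1; B = pow(g, b, p)  # my keypair

# Honest exchange: DH(B, a) == DH(A, b).
assert pow(B, a, p) == pow(A, b, p)

# Now suppose an attacker has stolen your private key 'a'.
stolen_a = a

# To impersonate *me* to *you*, the attacker needs only my PUBLIC key B,
# which travels in the clear during the handshake. My private key 'b' is
# never required.
attacker_secret = pow(B, stolen_a, p)
assert attacker_secret == pow(B, a, p)  # the same secret you derive
```

Since `attacker_secret` matches what you compute, nothing on your end distinguishes the attacker from me.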
"He says that is impossible and you need B's key to impersonate B when talking to A." Indeed this is what you'd expect out of any secure handshake, right? But that's not the case with the naive Tox one:
The handshake depends on forming a shared secret. They do so by computing an ECDH, which has the property:
ECDH(private A, public B) = ECDH(private B, public A)
So, if you control private A, you can form the shared secret that A relies on for verifying anybody else's identity. Reasonable AKEs protect against this; Tox's naive one does not. So, instead of requiring two compromised keys for a man in the middle between one peer pair (A<-->B), you only need one compromised key for a man in the middle between infinitely many peer pairs (A<-->{everybody}).
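The contrast with a binding construction like the one sketched earlier in the thread can be demonstrated in the same toy group (insecure illustration-only parameters, multiplicative notation: scalar-mult is `pow()`, "+" is multiplication mod p). With the naive handshake, stolen private A plus public keys reproduces the key; with the bound handshake, the attacker is missing a term.

```python
import hashlib
import secrets

# Toy group -- illustration only, NOT secure parameters.
p, g = 2**127 - 1, 5

def keypair():
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)

def H(n: int) -> bytes:
    return hashlib.sha256(n.to_bytes(16, "big")).digest()

l_a, L_A = keypair(); e_a, E_A = keypair()   # Alice (l_a is stolen)
l_b, L_B = keypair()                          # Bob
e_m, E_M = keypair()                          # attacker's own ephemeral

# Naive handshake: key depends only on the long-term ECDH.
bob_naive      = H(pow(L_A, l_b, p))          # what Bob derives
attacker_naive = H(pow(L_B, l_a, p))          # stolen l_a + PUBLIC L_B
assert attacker_naive == bob_naive            # attacker is "Bob" to Alice

# Bound handshake: H(l_A*E_B + e_A*L_B). The attacker substitutes
# their ephemeral E_M for Bob's, so Alice computes:
alice_key = H(pow(E_M, l_a, p) * pow(L_B, e_a, p) % p)

# The second term equals g^(l_b*e_a); the attacker holds neither l_b
# nor e_a. One of the many values they *can* compute, for example:
attacker_try = H(pow(E_A, e_m, p) * pow(L_A, e_m, p) % p)
assert attacker_try != alice_key              # fails to match
```

This is only a sketch of the asymmetry, not the actual Tox or OTR code, but it shows why binding the ephemeral and long-term secrets defeats this particular single-key attack.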
So if I'm understanding this correctly, this exploit is only possible if someone gets a hold of your private key? This sounds more like an academic/theoretical worry than anything that would concern the average user. Realistically, if someone has your private key, you are compromised, end of story. If damage mitigation is possible it should definitely be looked into as a matter of principle, but trying to discourage people from using Tox or even from developing it over such a tiny flaw seems like little more than hubris/concern trolling. I'm sure it would look great on your blog or resume to be able to say that you, the crypto expert, found a fatal flaw in a well-established security project that forced it to shut down. Luckily it seems the actual developers of Tox have more common sense than some of these "experts", whose standards of perfection, if ever realized, would see that we all stop using technology altogether.
A lot of these supposed "secure messaging" applications seem to roll their own crypto and hope that it's secure. And then their users promote the insecure protocol. I think we need a new word for this, like "cryptohipster".
From the FAQ: "No, really, what's Tox? It's a VERY secure Instant Messenger ...."
So they acknowledged this is not true in their issue tracker, but not publicly? Further, they internally claim they don't know what threats it faces, don't understand what those threats are, and don't want to worry at this time about what insecurity they have?
Also: "a largely undocumented, untested, and not well-understood code base of about 19 ksloc (C)" from one of their own developers.
Some background: TokTok was started a year ago and inherited the core code from Tox with the intention to take the project to the place it claims to be in. TokTok is not equal to Tox, and we have no control over what the Tox project presents on its website.
I do think that making such claims without any backing security proofs is quite bold. In TokTok, we've avoided that and instead say: our aim is to bring security software to everyone. We don't claim that the software is already secure, but we have confidence in the general architecture and are working towards provable security. Part of that is a formal specification (not the human language spec) helping us provide users and cryptographers with a security proof. We are aware of some flaws, and have concrete plans (roadmap) to fix them.
I know that statements like the one you quoted don't improve confidence, but I think that saying "we're very secure" without proof is simply a lie. Many projects out there, some similar to Tox, claim security but aren't actually proven secure.
We are now simply a group of people who believe in the idea and want to make it reality. I am affiliated with TokTok, not with Tox.
Regarding public acknowledgement: I will update the TokTok website shortly.
CiPHPerCoder | 9 years ago
In contrast, some of the Tox team's response makes me want to take up drinking. The "crypto secret club gimmick" remark in particular.
eeZah7Ux | 9 years ago
People, please stay away from Tox.
zx2c4 | 9 years ago
Stay away!
kevin_b_er | 9 years ago
This is quite disappointing.
lightedman | 9 years ago
Man can make it, man can break it.