How does this compare to the work done by the Netherlands group (Hülsing, Zimmermann, et al.) [1] and the Kudelski group from Switzerland (Raynal, Genet, Romailler) [2]?
It's nice to see someone making this more available. I had thought about trying to push the pq-wg implementation from the Kudelski group into WireGuard, or trying it out, but I never had the time. This Rust implementation seems to be an improvement on the implementation side, but I don't know about the underlying proofs.
We are based on the work of Hülsing, Ning, Zimmermann, Weber and Schwabe and are in contact with the group. I hadn't heard about Kudelski; the slides seem to refer to a reimplementation of the 2020 paper, and the Go implementation has been stale for two years. They also seem to use a tweaked CRYSTALS-Kyber implementation, which I would not trust. The link to the blog post they refer to is dead.
The Rosenpass protocol builds on the 2020 paper but also adds security against state disruption attacks (CVE-2021-46873). The implementation is actively maintained and written in Rust, not in Go. We use Classic McEliece and Kyber from the OQS library.
I think there is some confusion about terminology here. Mathematical proofs, including cryptographic proofs, use models that simplify reality; i.e. the real, practical system might still be susceptible to attacks despite a proof of security.
For crypto primitives (Classic McEliece, Curve25519, Ed25519, RSA, etc.) the current standard for proofs is showing that they are as hard as some well-studied mathematical problem. This is done by showing that an attack on the primitive leads to an attack on the underlying mathematical problem. The proof for Diffie-Hellman shows that attacking DH yields an efficient solution to the discrete logarithm problem; i.e. the proof is a reduction to the underlying problem.
No primitive is perfectly secure; at the very least, brute force – guessing each possible key – is always possible, so there is some probability that the adversary finds the right key. We call this probability the adversary's advantage. One task in cryptanalysis is to find better attacks against primitives, i.e. attacks with a higher advantage; if an attack with polynomial average runtime is found, the primitive is broken. Even finding a better super-polynomial attack is still an interesting result.
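The brute-force baseline can be made concrete with a two-line sketch (the function name is invented for illustration): an adversary who gets q guesses at a uniformly random k-bit key succeeds with probability q/2^k.

```python
def guessing_advantage(q: int, k: int) -> float:
    """Success probability of q distinct guesses against a uniform k-bit key."""
    return min(q / 2**k, 1.0)
```

Even at a billion guesses per nanosecond, the advantage against a 128-bit key stays negligible for any realistic runtime; that is what "secure against brute force" quantifies.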
The standard for protocols is proving that the protocol is secure assuming the primitives are secure; since multiple primitives are used, you basically get a formula deriving an advantage against the entire protocol from the advantages against the individual primitives. The proof is a reduction to a set of primitives.
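As a toy illustration of what such a formula often looks like – a weighted sum (a union bound) over the per-primitive advantages; the function and weights here are invented for illustration, not taken from any actual proof:

```python
def protocol_advantage_bound(primitive_advantages, weights=None):
    """Upper bound on the protocol adversary's advantage as a weighted sum
    (union bound) of the advantages against the individual primitives."""
    weights = weights or [1] * len(primitive_advantages)
    return sum(w * a for w, a in zip(weights, primitive_advantages))
```

The weights come out of the reduction itself (e.g. the number of sessions an attack must be replayed across), which is why "tightness" of a reduction matters in practice.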
We did not build a proof to that gold standard, although we are working on it. We built a proof in the symbolic model – known as a symbolic analysis. This uses the perfect cryptography assumption; i.e. we assume that the advantage against each primitive is zero. Google "Dolev-Yao model".
This makes the proof much easier; a tool such as ProVerif can basically find a proof automatically using logic programming methods (Horn clauses).
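A minimal sketch of the idea behind such tools, assuming nothing about ProVerif's actual internals: represent attacker knowledge as a set of symbolic terms and saturate it under Horn-clause-style deduction rules like "knows enc(m, k) and knows k ⇒ knows m". All term constructors here are invented for the sketch.

```python
def saturate(knowledge: set) -> set:
    """Close a set of symbolic terms under Dolev-Yao-style deduction rules."""
    know = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(know):
            # Decryption rule: enc(m, k) plus k yields m.
            if isinstance(t, tuple) and t[0] == "enc":
                _, msg, key = t
                if key in know and msg not in know:
                    know.add(msg)
                    changed = True
            # Projection rule: a pair yields its components.
            if isinstance(t, tuple) and t[0] == "pair":
                for part in t[1:]:
                    if part not in know:
                        know.add(part)
                        changed = True
    return know
```

A secrecy query then amounts to checking whether the secret term ends up in the saturated set; the perfect-cryptography assumption is visible in the rules – without the key, a ciphertext yields nothing.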
The definitions of security are fairly well understood; unfortunately there is a lot to go into, so I can't expand on it here. Looking up "IND-CPA" and "IND-CCA" might be a good start; these are the security games/models for asymmetric encryption, and from there you could move on to the models for key exchange algorithms. Reading the [Noise protocol spec](https://noiseprotocol.org/) is also a good start.
(my answer is general and not specific to this submission)
The question is too broad to be answered: there are many different formal verification techniques (including static techniques, and also dynamic techniques that happen at runtime), and you could be formally verifying only specific properties of the system.
Now, if your formal verification technique forces you to check that each index you use is within bounds (for instance by forcing you to write loop invariants for each loop – though that alone is not sufficient, because you can index buffers outside a loop, or with something unrelated to the loop invariant), then yes, you have proved that you will not overflow buffers.
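A toy version of the loop-invariant point (names invented for illustration): the invariant a static verifier would ask you to prove is exactly the bounds condition. Here it is checked dynamically with `assert`; a static tool would discharge it once and for all, for every possible input.

```python
def sum_buffer(buf):
    """Sum a buffer, making the bounds invariant of the loop explicit."""
    total, i = 0, 0
    while i < len(buf):
        assert 0 <= i < len(buf)  # loop invariant: every access is in bounds
        total += buf[i]
        i += 1
    return total
```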
But those pesky implementations are always imperfect and never totally proved correct; what's more, they run on pesky hardware which can have flaws and which is usually not itself formally verified, so…
And then you have model checking, which is also a formal verification technique. You can prove that you won't overflow buffers… in the model (which is a spec). That proves that your spec is sound and that you can implement it without flaws, but it does not actually check that your implementation is correct, of course. Unless your model checking tool can also build the implementation and this feature is proved correct.
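A minimal sketch of explicit-state model checking under exactly these caveats: exhaustively explore a small invented model of an index walking over a buffer, and verify the "index in bounds" invariant in every reachable state – of the model, not of any real implementation.

```python
BUF_LEN = 4

def step(state):
    """Successor states of the model; a state is (index, done)."""
    i, done = state
    if done:
        return []
    if i + 1 < BUF_LEN:
        return [(i + 1, False), (i, True)]  # advance, or stop early
    return [(i, True)]                       # must stop at the end

def check_invariant(start):
    """Explore all reachable states; True iff the bound holds everywhere."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        i, _ = s
        if not (0 <= i < BUF_LEN):
            return False  # counterexample: out-of-bounds state reachable
        for nxt in step(s):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True
```

Real model checkers add temporal logic, symbolic state representations, and counterexample traces, but the core loop – exhaust the reachable states, check the property in each – is this.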
edit: it seems my model checking paragraph is more relevant than I expected, this submission is actually about model checking if it checks the protocol (and not the implementation).
As with any application, a small risk of critical security issues (such as buffer overflows or remote code execution) exists; the Rosenpass application is written in the Rust programming language, which is much less prone to such issues.
I think their formal analysis is only security/crypto related, at least for the time being.
Maybe not in general, but there is really interesting work happening in the area of verified protocols generating safe C code (memory safe, overflow safe, timing safe, etc). In particular Project Everest has a set of verified primitives underlying higher level protocols like TLS and more recently all Noise variants (https://eprint.iacr.org/2022/607).
As WireGuard is based on a Noise construction, it seems reasonable to hope that once formally verified PQ primitives are in place, a fully verified protocol implementation could be generated?
The code signing project I worked on was formally verified. The whole time they were verifying it, I found horrible bugs that needed fixing. One even made it past soft launch. So great job, boys, but this doesn't help me.
I have not been following this closely, but I thought most of the quantum-safe algorithms proposed so far had been found lacking against traditional attacks very soon after they were held up as standard contenders. Has this changed?
Nope, that is not correct. NIST has selected Kyber as one of the algorithms to standardize, and we are using it.
As other commenters mentioned (very good info there, thank you all!), the other algorithm we use – Classic McEliece – is one of the oldest algorithms and has been well studied. There is no known efficient attack against it.
One of the KEMs they've elected to use (Classic McEliece) has been around since the '70s and has arguably been studied more than the others. If you're not quite sure about lattices, I've heard it called the "conservative choice" for a PQ KEM.
Using a PSK alone doesn't make WireGuard quantum-safe. The security of the key exchange mechanism in WireGuard, which relies on the Diffie-Hellman protocol, is still vulnerable to quantum attacks.
If an attacker were to obtain the PSK and use a quantum computer to break the Diffie-Hellman key exchange, they would be able to decrypt the VPN traffic.
This is currently the thought process and the main reason why PQWG (Post-Quantum WireGuard) is actively being researched [1].
That's what they're doing: generating a key using post-quantum crypto and using it as the PSK. From TFA: "The tool establishes a symmetric key and provides it to WireGuard. Since it supplies WireGuard with the key through the PSK feature, using Rosenpass+WireGuard is cryptographically no less secure than using WireGuard on its own ("hybrid security"). Rosenpass refreshes the symmetric key every two minutes."
You still can. The problem arises when you don't actually want to pre-share the key but still want post-quantum forward secrecy. Then you need a PQ KEM like McEliece or Kyber to run a PQ-secure key establishment.
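A sketch of the "hybrid" idea under discussion, using only the standard library and invented names. This is a generic HKDF-style extract-and-expand, not Rosenpass's or WireGuard's actual key schedule: mix a classical shared secret (e.g. from X25519) with a PQ KEM shared secret, so the derived key stays secret as long as either input does.

```python
import hashlib
import hmac

def hybrid_psk(classical_ss: bytes, pq_ss: bytes,
               label: bytes = b"hybrid psk") -> bytes:
    """Derive a 32-byte key from a classical and a post-quantum shared secret."""
    # Extract: compress both secrets into a fixed-size pseudorandom key.
    prk = hmac.new(b"\x00" * 32, classical_ss + pq_ss, hashlib.sha256).digest()
    # Expand: derive output keying material bound to a context label.
    return hmac.new(prk, label + b"\x01", hashlib.sha256).digest()
```

Breaking the output requires knowing both inputs, which is why supplying a PQ-derived key through WireGuard's PSK slot can only add security, never remove it.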
That doesn't solve the key management problem; it just defines it as out of scope, since you still need to exchange that preshared key outside WireGuard.
It's important to point out that these programs have two different objectives.
The Mullvad client is designed to connect to a closed-source service, which is run by someone else. It supports a bunch of different plugins, including OpenVPN and WireGuard. So it could probably adopt Rosenpass, at least with its WG plugin.
WireGuard is designed for minimal protocol variability, high assurance implementations, and ultra small code size. It's used by VPN services, but also by end-users creating their own tunnels.
I'm too stupid to understand the crypto technicalities. Is this really a good solution? Or embrace, extend, extinguish targeted at WireGuard?
The paper abstract mentions a "cookie"-like concept, and from websec I know that cookies are not always the optimal solution and historically cookie implementations had a lot of attack surface.
EDIT: Seems to come from the German Max Planck Institute, which is funded by the German government.
"Lacking a reliable way to detect retransmission, we remove the replay protection mechanism and store the responder state in an encrypted cookie called “the biscuit” instead. Since the responder does not store any session-dependent state until the initiator is interactively authenticated, there is no state to disrupt in an attack."
Both WG and PQWG are vulnerable to state disruption attacks; they rely on a timestamp to protect against replay of the first protocol message. An attacker who can tamper with the local time of the protocol initiator can inhibit future handshakes, rendering the initiator’s static keypair practically useless.
The use of the insecure NTP protocol is the reason for the "cookie" / "Biscuit" mechanism.
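A sketch of the stateless-responder idea behind the biscuit, with all names invented for illustration: the responder seals its handshake state into an authenticated blob that the initiator must echo back, so there is no in-memory state for an attacker to disrupt. Real biscuits are encrypted AEAD ciphertexts; this stdlib-only toy merely authenticates.

```python
import hashlib
import hmac
import json

# Hypothetical responder-local key; in a real protocol this would be a
# securely generated, periodically rotated secret.
BISCUIT_KEY = b"responder-local-secret-key------"

def seal_biscuit(state: dict) -> bytes:
    """Pack handshake state into a tagged blob; responder keeps nothing."""
    blob = json.dumps(state, sort_keys=True).encode()
    tag = hmac.new(BISCUIT_KEY, blob, hashlib.sha256).digest()
    return tag + blob

def open_biscuit(biscuit: bytes):
    """Recover the state iff the tag verifies; tampered biscuits are rejected."""
    tag, blob = biscuit[:32], biscuit[32:]
    good = hmac.new(BISCUIT_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, good):
        return None  # forged/corrupted biscuit: drop it, no state was disrupted
    return json.loads(blob)
```

Since the responder allocates no per-handshake state until the initiator is authenticated, an attacker spamming bogus first messages has nothing to overwrite or exhaust.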
Rosenpass author here; I myself am independent, hence the funding by NLnet. Some project participants are freelancers; two of my co-authors are employed at research institutes. One of my co-authors is employed at MPI-SP.
The cookie thing is a defense against WireGuard CVE-2021-46873; in my view the attack is not bad enough to get rid of the WireGuard protocol. WG is still the standard for pre-quantum VPN implementations. Rosenpass also needs to use post-quantum crypto primitives that need a lot of CPU and memory resources.
Rosenpass and WireGuard work together; Rosenpass runs in userspace and hands keys to WireGuard, so we do not plan to replace it any time soon.
It would be possible to apply the biscuit mechanism to classical WireGuard; unfortunately that would cause a protocol incompatibility. I am not sure if they are going to take that path.
chaxor | 3 years ago
[1] https://eprint.iacr.org/2020/379.pdf [2] https://csrc.nist.gov/CSRC/media/Presentations/pq-wireguard-... [3] https://github.com/kudelskisecurity/pq-wireguard
sevenoftwelve | 3 years ago
yellow_lead | 3 years ago
How can you prove this? There is still no mathematical proof that, for example, discrete log is NP-complete.
Edit: I see it's a WIP, but even the definition of "secure" seems difficult.
sevenoftwelve | 3 years ago
dcow | 3 years ago
Tepix | 3 years ago
jraph | 3 years ago
sevenoftwelve | 3 years ago
We are investigating ways to do more formal verification of the implementation itself.
ekiwi | 3 years ago
This is still a pretty neat result! End-to-end proofs from high level protocol to low level implementation are mostly still a research topic.
l-lousy | 3 years ago
thadt | 3 years ago
rurban | 3 years ago
ape4 | 3 years ago
hinkley | 3 years ago
wstuartcl | 3 years ago
jrexilius | 3 years ago
https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...
sevenoftwelve | 3 years ago
d-z-m | 3 years ago
madspindel | 3 years ago
gzer0 | 3 years ago
[1] https://ieeexplore.ieee.org/document/9519445/
miloignis | 3 years ago
sevenoftwelve | 3 years ago
We are :) Rosenpass is a fancy way of generating a PSK for WireGuard.
d-z-m | 3 years ago
CodesInChaos | 3 years ago
foobiekr | 3 years ago
Because key exchange and key rotation are a huge problem.
Fnoord | 3 years ago
eternalban | 3 years ago
https://raw.githubusercontent.com/rosenpass/rosenpass/papers...
themodelplumber | 3 years ago
rogers18445 | 3 years ago
https://github.com/mullvad/mullvadvpn-app/tree/main/talpid-t...
KateLawson | 3 years ago
inChargeOfIT | 3 years ago
_joel | 3 years ago
_joel | 3 years ago
hummus_bae | 3 years ago
[deleted]
bflesch | 3 years ago
gzer0 | 3 years ago
sevenoftwelve | 3 years ago
Fnoord | 3 years ago
On the Github repo it says:
"Supported by
Funded through NLNet with financial support for the European Commission's NGI Assure program."
On the website it says:
"Funded through NLnet with financial support from the European Commission's NGI Assure program."
wucke13 | 3 years ago