Intel SGX/remote attestation for verifying that servers are running the code they say they are running is very interesting, I believe Signal talked about doing something similar for contact discovery, but at a base level it requires a lot of trust. How do I verify that the attestation I receive back is the one of the machine I am contacting? Can I know for sure that this isn't a compromised SGX configuration, since the system has been broken in the past? Furthermore, can I really be sure that I can trust SGX attestations if I can't actually verify SGX itself? Even if the code running under SGX is verifiable, as an ordinary user it's basically impossible to tell if there are bugs that would make it possible to compromise.
Personally I like the direction Mullvad went instead. I get that it means we really can't verify Mullvad's claims, but even in the event they're lying, at least we got some cool Coreboot ports out of it.
If you're really paranoid, neither this service nor Mullvad offers that much assurance. I like the idea of verifiability, but I believe the type of people who want it are looking to satisfy deeper paranoia than can be answered with just trusting Intel... Still, more VPN options that try to take privacy claims seriously is nothing to complain about.
Intel will not attest insecure configurations. Our client will automatically verify the attestation it receives to make sure the certificate isn't expired and has a proper signature under Intel's CA trust.
A lot of people have been attempting to attack SGX, and while there have been some successful attacks, these have been addressed and resolved by Intel. Intel will not attest any insecure configuration, and the same goes for other TEE vendors (AMD SEV, ARM TrustZone, etc.).
I'm a huge fan of the technical basis for this. I want services to attest themselves to me so I can verify that they're running the source code I can inspect. And, well, the combination of founders here? Good fucking lord. I'm really fascinated to see whether we can generate enough trust in the code to be able to overcome the complete lack of trust that these people deserve. I can't imagine a better way to troll me on this point.
>the complete lack of trust that these people deserve
Yeah, I took one look at that and laughed. CEO of mt gox teaming up with the guy who sold his last VPN to an Israeli spyware company sounds like the start of a joke.
The SGX TCB isn’t large enough to protect the really critical part of a private VPN: the source and destination of packets. Nothing stops them from sticking a user on their own enclave and monitoring all the traffic in-and-out.
Also, the README is full of AI slop buzzwords, which isn’t confidence-inspiring.
You also have to trust that SGX isn't compromised.
But even without that, you can log what goes into SGX and what comes out of SGX. That seems pretty important, given that the packets flowing in and out need to be internet-routable and necessarily have IP headers. Their ISP could log the traffic, even if they don't.
> Packet Buffering and Timing Protection: A 10ms flush interval batches packets together for temporal obfuscation
That's something, I guess. I don't think 10ms worth of timing obfuscation gets you very much though.
> This temporal obfuscation prevents timing correlation attacks
This is a false statement. It makes correlation harder but correlation is a statistical relationship. The correlations are still there.
All that said, it is better to use SGX than to not use SGX, and it is better to use timing obfuscation than to not. Just don't let the marketing hype get ahead of the security properties! (The quotes above are from their GitHub README: https://github.com/vpdotnet/vpnetd-sgx)
While I do see the impl of the 10ms flush interval, I don't see any randomisation within batches. So iiuc, packets are still flushed in their original order.
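For what it's worth, a shuffle at flush time would at least address the ordering leak; a minimal sketch (hypothetical, not from their repo):

```python
import random

def flush_batch(batch: list, rng: random.Random) -> list:
    # Shuffle before emission: batching alone hides only timing within
    # the window, not the relative order of the packets in it.
    out = list(batch)
    rng.shuffle(out)
    return out

rng = random.Random(42)
batch = ["pkt1", "pkt2", "pkt3", "pkt4", "pkt5"]
shuffled = flush_batch(batch, rng)
print(shuffled)
```

This only helps within a window, of course; cross-window timing correlation is untouched.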
One of the many reasons I love Mullvad (been using it for 4 years now) is their simple pricing—$5/month whether you subscribe monthly, yearly, or even 10 years out.
I wanted to give your product a try, but the gap between the 1-month and 2-year plans is so big that a single month feels like a rip-off, while I’m not ready to commit to 2 years either.
On payments: for a privacy-focused product, Monero isn’t just a luxury, it’s a must (at least for me). A VPN that doesn’t accept Monero forces users into surveillance finance, since card and bank payments are legally preserved forever by processors. That means even if the VPN “keeps no logs,” the payment trail still ties your real identity to the service.
Until crypto is legally treated like cash (e.g. I don't have to report that I bought a beer with a $20 bill from an ATM), I don't think it's a very satisfying solution to have to either 1. Report to the IRS that I bought a VPN with monero or 2. Commit a tax crime and be paranoid about the IRS using automated tools to find you out for years after each transaction.
Even ignoring that elephant in the room, how do you regularly get the crypto (to pay a subscription) without leaving a paper trail or dealing with sketchy people?
I like virtual cards like privacy.com. If a state actor is after you, they will find you. So the typical threat model to me is companies trying to track you, like your ISP/Google/Facebook.
It would be nice if there was some way to be tax compliant and get the privacy benefits of monero though. Am I missing some crypto tax compliance tooling here or are all of these crypto payment users just poking the IRS bear?
Okay I don't have much information about this whole attestation flow and one question boggles my mind. If someone can explain this in simple terms, I'd be thankful:
The post says build the repo and get the fingerprint, which is fine. Then it says compare it to the fingerprint that vp.net reports.
My question is: how do I verify the server is reporting the fingerprint of the actual running code, and not just returning the (publicly available) fingerprint that we get as a result of building the code in the first place?
"Ask a VP.NET server for the fingerprint it reports" is a little bit simplistic. The process for actually doing this involves you handing the server a random number, and it sending you back a signed statement including both the fingerprint and the random number you gave it. This prevents it simply reporting a fixed fingerprint statement every time someone asks. The second aspect of this is that the key used to sign the statement has a certificate chain that ties back to Intel, and which can be proven to be associated with an SGX enclave. Assuming you trust Intel, the only way for something to use this key to sign such a statement is for it to be a true representation of what that CPU is running inside SGX at the time.
> how do I verify the server is reporting the fingerprint of the actual running code
Since this was answered already, I'll just say that I think the bigger problem is that we can't know if the machine that replied with the fingerprint from this code is even related to the one currently serving your requests.
Someone had a comment here that just disappeared, mentioning it's by Mark Karpelès (yes, the same guy from MtGox) and Andrew Lee. Why did that remark get deleted?
The people who were convicted of multi-million-dollar fraud resulting in someone walking away with millions of dollars of others' bitcoin deposits, IIRC (https://en.m.wikipedia.org/wiki/Mark_Karpel%C3%A8s if you want to check details).
Also, I couldn't see where it is based. Anywhere in Five Eyes countries, or places like the USA with national security letters (or just their fascist government), is probably not going to fit most people's threat models.
They claim to allow anonymous sign-up and payments, but require an email, an account, a zip code, and a name for crypto payments; fake info could be used, I guess. I tried ordering via crypto, but it constantly gives me this error: "Unable to load order information. Try again".
Honestly, I feel more comfortable using Mullvad. This team has some folks with questionable backgrounds and I wouldn't trust Intel. Also VPN providers are usually in non-us countries due to things like the Lavabit, Yahoo incidents and the Snowden revelations.
> Honestly, I feel more comfortable using Mullvad. This team has some folks with questionable backgrounds and I wouldn't trust Intel.
Relying on "trust" in a security/privacy architecture isn't the right way to do things - which is what this solves. It removes the need to trust in a person or person(s) in most VPN company cases since they have many employees, and moves it to trusting in code.
> Also VPN providers are usually in non-us countries due to things like the Lavabit, Yahoo incidents and the Snowden revelations.
The system is designed so that any change server-side will be immediately noticed by clients/users. As a result, these issues are sufficiently mitigated, and people can instead take advantage of strong consumer and personal protection laws in the US.
The chief privacy officer of the company is the moron that destroyed Freenode. Of course, Libera lives on, but it is a transition we could’ve done without.
Freenode was sold to me by Christel, the previous owner. I did not even offer to purchase it, and simply assumed I was doing what I had been doing for a decade for freenode and many other FOSS projects - keeping them alive. It was my funds that did so the whole time for freenode (and a number of other projects, which I stopped funding thereafter given the death threats I was receiving, which unfortunately led to the end of many of them).
The Libera staff [1] attempted to steal the domain because they wanted control. None of the staff were developers at the time, and they complained they couldn't even write their own IRC client. Think of Mozilla: the people who run it aren't the coders. Same thing.
Here are the receipts for every statement I just made: http://techrights.org/wp-content/uploads/2021/05/lee-side.pd...
PS: Freenode seems more active than Libera, where everyone is just idle (bots?), but that is another point. See for yourself with the client I wrote: IRC.com.
[1] By Libera staff I mean the former freenode staffers who left to form Libera. These are the same people I spent a lot of money helping to protect legally from the allegations made by “OldCoder”
These VPNs-for-privacy are so bad. You give your credit card (verified identity), default gateway, and payload to foreign soil and feel safe. On top of that, your packets' cleartext metadata identifies you with cryptographic accuracy.
In today's internet you just cannot have exit IP which is not tied either into your identity, payment information or physical location. And don't even mention TOR, pls.
The US government might be able to pressure Intel into doing something with SGX, but there are way too many eyes on this for it to go unnoticed in my opinion, especially considering SGX has been around for so long and messed with by so many security researchers.
The US government also likely learned a lesson from early attempts at backdoors (RSA, etc.): these kinds of things do not stay hidden and do not reflect well.
We've thought about this long and hard and are planning to mitigate this as much as possible. Meanwhile, we still offer something that is a huge step forward compared to what is standard in the industry.
Well, if by against governments you mean against enforcement of regional IP protection, then yes. The major use case of VPNs is geo-restriction evasion and torrenting, not the high-falutin' privacy the good guys depend on. The second major use case is avoiding crypto KYC.
What does the verifiable program do, though? With a VPN, what I'm concerned about is my traffic not being sniffed and analyzed. This code seems to have something to do with keys, but it's not clear how that helps...?
This is the server-side part of things. It receives encrypted traffic from your (and other customers) device, and routes it to the Internet.
This guarantees that your traffic isn't being linked to you, and is mixed with others' in a way that makes it difficult for someone to attribute it to you, as long as you also protect yourself on the application side (clear cookies, no tracking browser extensions, etc.).
This is cool, and I'm glad to see someone doing this, but I also feel obligated to mention that you can also just quickly deploy your own VPN server that only you have access to with AlgoVPN: https://github.com/trailofbits/algo
Where are you gonna deploy it? The ingress and egress IP will lead back to you.
Also unless you are running on a fully trusted stack (TPM or other attestation), you don't in fact know that only you have access. This is hard. `Quickly` isn't a thing.
How does this attestation work? How can I be sure that this isn't just returning the fingerprint I expect without actually running in an enclave at all? Does Intel sign those messages?
Similar to TLS, the attestation includes a signature and a x509 certificate with a chain of trust to Intel's CA. The whole attestation is certified by Intel to be valid and details such as the enclave fingerprint (MRENCLAVE) are generated by the CPU to be part of the attestation.
This whole process is already widely used in financial and automotive sectors to ensure servers are indeed running what they claim to be running, and well documented.
> Build the code we published, get the fingerprint it produces, ask a VP.NET server for the fingerprint it reports, and compare the two. If they match, the server is running the exact code you inspected. No trust required.
Okay, maybe I'm being thick, but... when I get a response from your server, how do I know it's actually running inside the enclave, and not an ordinary process sending a hardcoded expected fingerprint?
Intel SGX comes with an attestation process aiming at exactly that. The attestation contains a number of details, such as the hardware configuration (cpu microcode version, BIOS, etc) and the hash of the enclave code. At system startup the CPU gets a certificate from Intel confirming the configuration is known safe, which is used by the CPU to in turn certify the enclave is indeed running code with a given fingerprint.
When the connection is established we verify the whole certificate chain up to Intel, and we verify the TLS connection itself is part of the attestation (public key is attested).
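The "public key is attested" step amounts to channel binding: the enclave commits to the TLS key it will use, so TLS can't be terminated outside the enclave by a middlebox. A toy sketch (the key bytes and helper name are illustrative; in a real SGX quote this hash would sit in the quote's user-data field):

```python
import hashlib

def report_data(tls_pubkey: bytes, nonce: bytes) -> bytes:
    # Hash of (nonce || TLS public key), carried inside the signed quote.
    return hashlib.sha256(nonce + tls_pubkey).digest()

enclave_pub = b"\x04" + b"\x11" * 64  # fake uncompressed EC point
mitm_pub = b"\x04" + b"\x22" * 64     # a middlebox's different key
nonce = b"0123456789abcdef"

attested = report_data(enclave_pub, nonce)  # value the quote vouches for
print(attested == report_data(enclave_pub, nonce))  # True: key matches quote
print(attested == report_data(mitm_pub, nonce))     # False: MITM key detected
```

If the key presented in the TLS handshake doesn't hash to the attested value, the client rejects the connection even though the quote itself is genuine.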
Cute idea. Bit worried about the owners here; rasengan doesn't have a stellar reputation after what happened with Freenode.
The idea itself is sound: if there are no SGX bypasses (hardware keys dumped, enclaves violated, CPU bugs exploited, etc.), and the SGX code is sound (doesn't leak the private keys by writing them to any non-confidential storage, isn't vulnerable to timing-based attacks, etc.), and you get a valid, up-to-date attestation containing the public key that you're encrypting your traffic with plus a hash of a trustworthy version of the SGX code, then you can trust that your traffic is indeed being decrypted inside an SGX enclave which has exclusive access to the private key.
Obviously, that's a lot of conditions. Happily, you can largely verify those conditions given what's provided here; you can check that the attestation points to a CPU and configuration new enough to not have any (known) SGX breaks; you can check that the SGX code is sound and builds to the provided hash (exercise left to the reader); and you can check the attestation itself as it is signed with hardware keys that chain up to an Intel root-of-trust.
However! An SGX enclave cannot interface with the system beyond simple shared memory input/output. In particular, an SGX enclave is not (and cannot be) responsible for socket communication; that must be handled by an OS that lies outside the SGX TCB (Trusted Computing Base). For typical SGX use-cases, this is OK; the data is what is secret, and the socket destinations are not.
For a VPN, this is not true! The OS can happily log anything it wants! There's nothing stopping it from logging all the data going into and out of the SGX enclave and performing traffic correlation. Even with traffic mixing, there's nothing stopping the operators from sticking a single user onto their own, dedicated SGX enclave which is closely monitored; traffic mixing means nothing if it's just a single user's traffic being mixed.
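The single-user-enclave attack is just the anonymity set collapsing to size one. A back-of-the-envelope sketch of the point (names hypothetical):

```python
def linkability(batch_users: list) -> float:
    # With k distinct users in a mix batch, an outside observer's best
    # guess at attributing an output packet succeeds with probability 1/k.
    k = len(set(batch_users))
    return 1.0 / k

print(linkability(["alice", "bob", "carol", "dave"]))  # 0.25
print(linkability(["alice"]))  # 1.0: a dedicated enclave mixes nothing
```

And since the operator controls which users land on which enclave, they control k, so the mixing guarantee is theirs to revoke silently.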
So, while the use of SGX here is a nice nod to privacy, at the end of the day, you still have to decide whether to trust the operators, and you still cannot verify in an end-to-end way whether the service is truly private.
I funded freenode since 2011 so any narrative that makes it seem I just appeared out of nowhere is factually untrue. Also, I was handed it because Christel felt I was a good custodian thereof. Instead, former staff who I protected from allegations made by OldCoder for years, went on to form Libera, tried to steal the domain for a developers irc network when they themselves shockingly couldn’t even code a simple irc client, and then made up a false narrative.
The state of open source generally isn’t what you think and you would do well for yourself to read Lunduke’s Journal among other things. The developers don’t actually run most of the projects these days. Look at Mozilla.
You can't be serious. If you attempt to sign up, it will ask for a name and even more, such as a zip code if using cryptocurrency. Take a look at their about page and see the absolute clowns running this joke of a service. Avoid these monsters at all costs.
I really couldn't care less about another VPN provider; I yearn for a FOSS VPN implementation that has the operational simplicity of dsvpn [1] but operates over QUIC/MASQUE. I did look into doing this myself by contorting the Cloudflare quiche server, but I put it down due to time, something I hope to revisit soon.
[1]: https://github.com/jedisct1/dsvpn
This is hot garbage IMO. Admittedly I haven't looked past the intro page, and comments here. Plus I have a ton of experience in this area (although have never actually operated a consumer VPN or ToR or what have you).
I think the comments here about SGX trust are misguided. This isn't protecting you from deep state chip level intentional bypasses. We can at least have reasonable enough assurance in SGX per se. The average law enforcement isn't going to get to your data because of some undisclosed SGX issue.
But unlike AWS Nitro, which AIUI has a network stack that bypasses the guest OS (I believe the hypervisor can see everything, which I would trust about the same as SGX), SGX requires host/guest support to pass network packets. So in Nitro you can operate the TCB entirely without the (unverified, unattested) guest OS seeing anything? But in SGX the guest has to pass traffic back and forth to SGX. The difference here is who operates the untrusted bit. For SGX, it's the application author themselves.
That is why you need the 10ms batching, to stop the host/guest from matching src/dst pairs, and inspecting the outbound traffic (inbound is presumably encrypted for the TEE). However, batching is laughable and won't stop correlation (unless you inject significant fake traffic, which the host/guest has to not be able to tell is fake).
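The "inject significant fake traffic" requirement amounts to constant-rate link shaping; a toy sketch (names hypothetical) of what the host would then observe:

```python
DUMMY = b"<dummy>"  # in reality padding must be encrypted, indistinguishable

def shape(queue: list, rate: int) -> list:
    # Emit exactly `rate` packets per interval, padding with dummies,
    # so the observed output rate carries no information about real load.
    out = queue[:rate]
    del queue[:rate]
    out += [DUMMY] * (rate - len(out))
    return out

q = [b"p1", b"p2", b"p3"]
print(len(shape(q, 5)))  # 5
print(len(shape(q, 5)))  # 5: still 5 even with an empty queue
```

The cost is the obvious one: you pay for the padded rate all the time, which is why commercial VPNs don't do it.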
So like every other VPN this is marketing of snake oil.
Compare that to express or whoever it is that offers static IP within Nitro. That is way more useful than this pretend security. (Use of Nitro allows them to not know what static IP is assigned to you, so they can't be compelled to give that info up.)
MASQUE (Apple Private Relay) or other double-blind VPNs are better and don't require SGX.
Besides the technical inadequacies, you have the double whammy of PIA and MtGox heritage. oh my.
junon:
As far as I'm aware, no. Any network protocol can be spoofed, with varying degrees of difficulty.
I would love to be wrong.
rasengan:
Trusting random internet people is actually the biggest “troll” of the internet.
Any VPN that asks you to trust their guarantees and not the guarantees of code is selling you snake oil and should not be trusted.
Trust is not a feature in security. Thus, we removed it and replaced it with code-based guarantees.
selkin:
And worse, it is harder for the American government to eavesdrop on US soil than it is outside America.
Of course, if a national spying apparatus is after you, regardless of the nation, pretty good chance jurisdiction doesn’t matter.
jen729w:
Old copy? Might need an update.
nugzbunny:
I imagine those websites block IP ranges of popular VPN providers.
Am I right in thinking that hosting my own VPN would resolve this issue?
rasengan:
It's signed by Intel and thus guaranteed to come from the enclave!
rasengan:
That said, the freenode issue was debunked and you can see receipts here: http://techrights.org/wp-content/uploads/2021/05/lee-side.pd...
neurostimulant:
Huh, I thought Mark Karpelès is working at Private Internet Access.
From the about page:
> currently head of karpeles labs, a multi faceted research and development firm specializing in highly complex technology systems
I guess he quit to run a competing vpn company?