ohmygodel|6 years ago
1. Each hidden service chooses a "guard" relay to serve as the first hop for all connections.
2. A server running multiple hidden services has a guard for each of them. Each new guard is another chance to choose a guard run by the adversary.
3. An adversary running a fraction p of the guards (by bandwidth) has a probability p of being chosen by a given hidden service. A hosting service with k hidden services is exposed to k guards and thus has ~kp probability of choosing an adversary's guard. With, say, 50 hidden services, an adversary with only 2% of guards has nearly 100% chance of being chosen by one of those 50 hidden services.
4. The adversary can tell when it is chosen as a guard by connecting to the hidden service as a client and looking for a circuit with the same pattern of communication as observed at the client. Bauer et al. [0] showed a long time ago that this worked even using only the circuit construction times.
5. The adversary's guard can observe the hidden service's IP directly.
The risk of deanonymization with onion services in general (i.e. even not using an onion hosting service) is significant against an adversary with some resources and time. Getting 1% of guard bandwidth probably costs <$500/month using IP transit providers (e.g. relay 8ac97a37 currently has 0.3% guard probability with only ~750Mbps [1]). And every month or so a new guard is chosen, yielding another chance to choose an adversarial guard. Not to mention the risk of choosing a guard that isn't inherently malicious but is subject to legal compulsion in a given jurisdiction (discovering the guard of a hidden service has always been and remains quite feasible with little time or money, as demonstrated by Øverlier and Syverson [2]).
[0] "Low-Resource Routing Attacks Against Tor" by Kevin Bauer, Damon McCoy, Dirk Grunwald, Tadayoshi Kohno, and Douglas Sicker. In the Proceedings of the Workshop on Privacy in the Electronic Society (WPES 2007), Washington, DC, USA, October 2007.
[1] <https://metrics.torproject.org/rs.html#details/014E24C0CD21D...
[2] "Locating Hidden Servers" by Lasse Øverlier and Paul Syverson. In the Proceedings of the 2006 IEEE Symposium on Security and Privacy, May 2006.
turc1656|6 years ago
Assuming random assignment/selection of the guards, each time one is chosen there is a 98% chance of not being "caught" by choosing an adversary's guard. With 50 services, as you said, that's .98^50 ≈ .364 chance of never picking an adversary's guard, so the chance of getting caught is 1 − .364 ≈ .636, or about 63.6%. This is vastly different from being nearly 100%.
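The arithmetic above is easy to sanity-check; a quick Python snippet, using the 2% adversary fraction and 50 services from the parent comment:

```python
# Probability that at least one of k independently chosen guards belongs
# to an adversary controlling fraction p of guard bandwidth.
def p_caught(p: float, k: int) -> float:
    return 1 - (1 - p) ** k

p_miss = (1 - 0.02) ** 50              # every choice misses: 0.98^50
print(round(p_miss, 3))                # 0.364
print(round(p_caught(0.02, 50), 3))    # 0.636
```

Note ~kp is only a reasonable approximation when kp is small; here kp = 1.0, which is why the exact value (63.6%) differs so much from "nearly 100%".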
bvinc|6 years ago
He said something seemed to be DoS'ing the guard nodes, causing his service to automatically choose a new guard, in an attempt to get his service to connect to a guard node controlled by the adversary. He said in one case, they found his server's actual IP address and DoS'd it.
Could that be what happened?
ohmygodel|6 years ago
[0] http://www.hackerfactor.com/blog/index.php?/archives/868-Dea...
sorenjan|6 years ago
So does the guard know that it is a guard and that the traffic comes from a hidden service? I thought Tor worked by jumping from node to node, and that each node didn't know whether the traffic came from the original client/service or from another node in the chain. So each time you make a connection over Tor you're essentially telling a guard node "here's my real IP, send this traffic to this hidden service and return the response please" and you have to trust that they keep it a secret? I feel like I'm missing something here.
ohmygodel|6 years ago
1. S is at an IP address that is not a public Tor relay as listed in the Tor consensus. It's not impossible that S is a bridge (i.e. private Tor relay), but statistically unlikely because using a bridge isn't all that common.
2. During circuit construction, S extends the circuit beyond R two times. I don't see why Tor couldn't easily create dummy circuit extensions to fool R, but it doesn't (probably because there are so many other indicators that this change alone wouldn't solve the problem).
3. R observes what appear to be HTTP-level request-response pairs between it and S at about the same round-trip time (RTT) as the RTT R observes between it and S at the TCP layer, which should only happen if there were no more hops beyond S.
If I recall correctly, Kwon et al. [0] describe several more statistical indicators of being a guard for an onion service.
Also, you are right that a client doesn't tell the guard node the destination (e.g. the onion service) of its traffic. The guard node is not trusted with that because it already directly observes the client, and so giving it the other side would deanonymize the connection.
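The RTT comparison in point 3 can be sketched as a toy heuristic. This is an illustrative sketch, not Tor code; the function name and the 10 ms tolerance are my own assumptions:

```python
def looks_like_last_hop(tcp_rtt_ms: float, app_rtt_ms: float,
                        tolerance_ms: float = 10.0) -> bool:
    """Heuristic from R's point of view: if HTTP-level request/response
    pairs complete in roughly the same time as the TCP-layer RTT to S,
    the traffic is probably terminating at S rather than being relayed
    onward (extra hops would add measurable latency)."""
    return abs(app_rtt_ms - tcp_rtt_ms) <= tolerance_ms

print(looks_like_last_hop(42.0, 45.0))   # True: consistent with S being the endpoint
print(looks_like_last_hop(42.0, 180.0))  # False: latency suggests hops beyond S
```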
[0] https://www.usenix.org/conference/usenixsecurity15/technical...
dontbenebby|6 years ago
You can do fancy attacks all you want, if the server is in Russia they're probably not going to be honoring any MLATs
rolltiide|6 years ago
It's 2020 now, so much has to have changed. Tor sucked 7 years ago.
ohmygodel|6 years ago
1. The biggest improvement is that (in 2014 or 2015?) they reduced the number of entry guards from 3 to 1 [0], reducing the risk of a malicious guard by a factor of 3.
2. The time until a guard choice expires was increased from 2–3 months to 3–4 [1] (this maybe happened 3 years ago?). This increases by ~40% the expected time an adversary would need to passively wait to have his relay selected as a guard by a victim.
3. The bandwidth threshold to become a guard relay was raised from 250KB/s to 2000KB/s [2] (looks like in 2014). However, 2000KB/s=16Mbit/s is still a very low bar, and, moreover, for an adversary that can run relays above the threshold, this change increases the adversarial guard fraction as there are fewer guards above the threshold to compete with.
4. A new guard-selection algorithm was implemented that prevents a denial-of-service attack from forcing the selection of a large number of guards (i.e. > 20) in a short period of time [3]. I believe this merged in 2017. If an adversary can force guard reselection by an attack, you are still extremely vulnerable, though, as a limit of 20 still provides a 20x risk multiplier.
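The ~40% figure in point 2 follows directly from the rotation arithmetic. A minimal sketch; using the midpoints (2.5 and 3.5 months) of the lifetime ranges as the average rotation periods is my assumption:

```python
# Expected months until an adversary with guard fraction p is selected:
# the number of selections until a hit is geometric with mean 1/p, and
# each selection lasts one rotation period on average.
def expected_wait_months(p: float, avg_period_months: float) -> float:
    return avg_period_months / p

old = expected_wait_months(0.02, 2.5)   # 2-3 month guard lifetime
new = expected_wait_months(0.02, 3.5)   # 3-4 month guard lifetime
print(old, new, new / old)             # 125.0 175.0 1.4 -> ~40% longer wait
```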
[0] https://trac.torproject.org/projects/tor/ticket/12688
[1] https://trac.torproject.org/projects/tor/ticket/8240
[2] https://trac.torproject.org/projects/tor/ticket/12690
[3] https://trac.torproject.org/projects/tor/ticket/19877
stock_toaster|6 years ago
[1]: https://blog.torproject.org/announcing-vanguards-add-onion-s...
edm0nd|6 years ago
A simple national security letter (NSL) without even needing to get a warrant and BOOM you can tap the server and get all info about the person running it.