
Two months after FBI debacle, Tor Project still can’t get an answer from CMU

285 points | pavornyoh | 10 years ago | arstechnica.com

45 comments

[+] rtpg|10 years ago|reply
>... a few weeks earlier had canceled a security conference presentation on a low-cost way to deanonymize Tor users. The Tor officials went on to warn that an intelligence agency from a global adversary also might have been able to capitalize on the vulnerability.

This is kind of worrying. I hope the Tor Project has information on the attack and is looking into ways to mitigate it. But if it's due to the nature of the protocol, then maybe it's time to look for a successor (we aren't using WEP anymore, right?)

As to the CMU stuff... Tokyo University has this pledge to make sure basically no military research is done on campus, which I feel to be pretty laudable.

I wonder if there's a similarly worded pledge for this sort of thing. But at the same time, universities can do a lot of good security research that can, in the end, strengthen the systems we use.

The "$1 million to target these specific people" sounds dirty, but "$1 million to do research on the vulnerabilities of Tor"... well that sounds like research to me. Pretty tricky.

[+] cornholio|10 years ago|reply
There is no WPA2 alternative here; this is it, the bleeding edge of Internet privacy algorithms. And since privacy is seen as a public enemy, a publicly sponsored attack is underway to weaken it, to the point where you can't really trust Tor for the world-changing, nation-state-adversary, Snowden- or Wikileaks-level missions.

People needing a high level of protection can and should use Tor in their workflow, but they should not expect a one-click solution. On the other hand, it's perfectly adequate for the day-to-day use of privacy-minded individuals who are not targeted by active attacks.

[+] lwf|10 years ago|reply
> Tokyo University has this pledge to make sure basically no military research is done on campus, which I feel to be pretty laudable.

So, you move it off-campus. See e.g. the MIT Lincoln Lab, https://www.ll.mit.edu/

[+] fweespeech|10 years ago|reply
> I hope the Tor Project has information on the attack and is looking into ways to mitigate it. But if it's due to the nature of the protocol, then maybe it's time to look for a successor (we aren't using WEP anymore, right?)

The attacks on Tor are largely in the form of:

A) Outright implementation flaws [e.g. Software bugs ]

B) Malicious actors deploying Tor nodes [e.g. On July 4 2014 we found a group of relays that we assume were trying to deanonymize users. They appear to have been targeting people who operate or access Tor hidden services. The attack involved modifying Tor protocol headers to do traffic confirmation attacks. https://blog.torproject.org/blog/tor-security-advisory-relay... ]

> A traffic confirmation attack is possible when the attacker controls or observes the relays on both ends of a Tor circuit and then compares traffic timing, volume, or other characteristics to conclude that the two relays are indeed on the same circuit. If the first relay in the circuit (called the "entry guard") knows the IP address of the user, and the last relay in the circuit knows the resource or destination she is accessing, then together they can deanonymize her. You can read more about traffic confirmation attacks, including pointers to many research papers, at this blog post from 2009:
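The quoted attack can be illustrated with a toy sketch (hypothetical data, not real Tor traffic): an observer at the entry guard and one at the exit each bin the bytes they see per second, then compare the two series; a high correlation suggests both vantage points sit on the same circuit.

```python
# Toy traffic confirmation sketch: compare per-second traffic volume
# seen at the entry guard with volume seen at the exit relay.
def bin_traffic(packets, n_bins):
    """packets: list of (timestamp_seconds, size_bytes) -> bytes per 1s bin."""
    bins = [0.0] * n_bins
    for t, size in packets:
        bins[int(t)] += size
    return bins

def pearson(a, b):
    """Plain Pearson correlation; near 1.0 suggests the same circuit."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def same_circuit_score(entry_packets, exit_packets, n_bins=10):
    return pearson(bin_traffic(entry_packets, n_bins),
                   bin_traffic(exit_packets, n_bins))

# A bursty pattern seen at the entry reappears, slightly inflated, at the exit:
entry = [(0, 5000), (1, 200), (4, 7000), (5, 300), (8, 4000)]
exit_ = [(0, 5100), (1, 250), (4, 7100), (5, 350), (8, 4100)]
steady = [(2, 600), (3, 600), (6, 600), (7, 600), (9, 600)]  # unrelated flow

print(same_circuit_score(entry, exit_))   # close to 1.0
print(same_circuit_score(entry, steady))  # much lower
```

Real attacks are more sophisticated (padding, jitter, active tagging of protocol headers as in the 2014 relay advisory), but the core signal is this volume/timing correlation.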

Pretty much the only defense is to control the entry nodes you use yourself:

https://www.torproject.org/docs/faq.html.en#EntryGuards

> Restricting your entry nodes may also help against attackers who want to run a few Tor nodes and easily enumerate all of the Tor user IP addresses. (Even though they can't learn what destinations the users are talking to, they still might be able to do bad things with just a list of users.) However, that feature won't really become useful until we move to a "directory guard" design as well.
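The FAQ entry above corresponds to a couple of torrc options. A minimal sketch (the fingerprint and nickname below are placeholders, not real relays):

```
# Hypothetical torrc fragment: pin the entry guards Tor may use.
# The identities below are placeholders, not real relays.
EntryNodes A1B2C3D4E5F60718293A4B5C6D7E8F9001122334,myguardnickname
# Refuse to build circuits if none of the listed guards is usable:
StrictNodes 1
```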

It's an inherent problem with low-latency anonymity networks, and really an open research problem.

However, controlling your entry nodes has a different problem:

1) It pretty clearly links you to entering the Tor network via a consistent set of nodes.

2) Capturing these nodes via the datacenter and warrants/legal action has been done in the past. Anyone is going to be able to find these nodes, since they are no longer randomly selected...

3) Once you are actively targeted you are just as vulnerable.

[+] sandworm101|10 years ago|reply
The intelligence community used to value Tor. Remember where it came from. Now they don't, presumably because the primary intelligence target has shifted from fixed actors like nation states and large businesses to the general public. Now those nation states and businesses are 'intelligence partners' in the fight against the 'lone wolves' hiding within the masses. Perhaps then it is in Tor's interest to restart some rivalry between nation states.
[+] rdtsc|10 years ago|reply
NSA is schizophrenic in that regard. Remember that one of the things it does, besides looking in everyone's underwear drawers, is advise the US government (three-letter agencies, the military) on what crypto to use. In other words, it tells Uncle Sam how to lock his underwear drawer so other agencies don't peek in there.

It is always interesting to see what they say there. Because if they know, for example, that one type of crypto technique or implementation is vulnerable, will they still recommend it for storage of TS classified material? Will they recommend it for the US military or diplomatic service? If they say nothing, they leave it open to attack and are not doing their job. If they do say "don't use this combination of AES, prime numbers, or OpenSSL implementations", that also gives something away.

I wonder if the people who make these recommendations even talk to the people who discover, exploit, and actively penetrate systems. Because everything is very compartmentalized, they actually might not be able to.

That is why they are probably very interested (as we saw) in subverting or weakening algorithms and implementations so that they are the only ones who have a key (Dual_EC_DRBG), or the only ones with the computational capacity to exploit them (DES).

[+] eliteraspberrie|10 years ago|reply
I don't think they valued Tor specifically. They did value scientific research, which is what Tor was at the time. Like most research work, it got dropped once they had a working proof of concept. The State Department picked it up years later.
[+] hackuser|10 years ago|reply
I worry about Tor's security:

1) For security, most systems rely on their obscurity and on the fact that the assets they protect probably aren't worth much investment by the attackers. Tor can't rely on either of those circumstances: It's prominent and breaking into it is a one-stop solution to attacking many valuable targets.

2) Many organizations with large amounts of resources, from state intelligence agencies to law enforcement to security vendors to ISPs, would like to find ways to break Tor's security inexpensively.

3) True security is very difficult and expensive. For Tor, this is taken to an extreme by #2. Does the Tor Project have the resources to implement bug-free software (e.g., the kind that flies passenger planes)? Certainly not. Can they find and fix bugs as quickly as the attackers described above find and exploit them? Certainly not. I'm not criticizing them; they just don't have the resources.

4) Assuming the underlying concept of onion routing is secure, there still are plenty of targets for attacks such as implementation and all the other code Tor relies on (e.g., almost all of Firefox for the Tor Browser, encryption algorithms, your OS, etc.). Attacking a Tor user doesn't seem impossible.

Based only on the theorizing above, and not knowing Tor's actual implementation, I fear we're lucky if Tor is still expensive to attack. Of course, any smart attacker with an exploit will publicly complain about how hard Tor is to hack.

[+] baby|10 years ago|reply
If you look at Tor's concept, it's pretty clear that it cannot be considered secure.

Each time you use Tor, your packets actually go through a path of three different servers (or relays). If the attacker owns both ends, it's game over. How many relays are out there? How many are owned by the NSA or other governments?

It's pretty obvious that this system just cannot work if a majority of relays are owned by the attacker.
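The worry above can be put into a rough back-of-envelope model. Assuming (simplistically: real Tor weights relay selection by bandwidth and pins entry guards) that an adversary controls a fraction c of entry and exit capacity, both ends of one random circuit are malicious with probability about c², and repeated circuit building compounds the risk:

```python
# Back-of-envelope model, ignoring guard pinning and bandwidth weighting:
# an adversary controlling a fraction c of entry and exit capacity owns
# both ends of a single random circuit with probability about c * c.
def p_deanonymized(c, n_circuits):
    """Chance at least one of n_circuits has a malicious entry AND exit."""
    p_one = c * c
    return 1 - (1 - p_one) ** n_circuits

# A 10% adversary rarely wins on one circuit...
print(round(p_deanonymized(0.10, 1), 4))    # 0.01
# ...but over many circuits the odds compound:
print(round(p_deanonymized(0.10, 100), 4))  # ~0.634
```

This c² scaling, compounded over circuits, is exactly why Tor pins a small set of long-lived entry guards instead of picking a fresh entry node every time.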

[+] snsr|10 years ago|reply
This continues to reflect very poorly on CMU and CERT.
[+] greggarious|10 years ago|reply
Yes, but if they're under some Kafkaesque gag order, there's not much they can do, right?
[+] enginn|10 years ago|reply
From what I've gathered, Tor is pretty robust, at least on paper, and when explained in an academic way it has me almost convinced that the apparatus does what it's supposed to, except for the part where it catastrophically fails when put into practice, for example:

1.) Custom Firefox 'Browser Bundles' that do not auto-update, ensuring latent vulnerabilities are left unaddressed

2.) Trusted 'third parties' running exit nodes, whom we hope and pray are doing their job correctly

3.) Weird, conspicuous-looking domains on the wire that do nothing more than alert the neighborhood that somebody's using Tor (unless everyone's using it, you stand out like a sore thumb)

4.) Sybil attacks in the form of people-with-more-money-than-you polluting the network

5.) ???

6.) Any number of other issues, which have since been patched but still work if the Tor user is uneducated about how Tor works (traffic analysis, correlation attacks, zero-knowledge-proof attacks, etc.)

[+] hiq|10 years ago|reply
> Personally, I use it maybe 10, 20 percent of the time. I know that there are people out there that are using it a lot of the time. But for me as much as I might hate Flash, there are times that I need to watch something on YouTube.

YouTube has been working for me using Tor Browser for months, if not years.

[+] codingdave|10 years ago|reply
That was an analogy, not a bug report.
[+] vaadu|10 years ago|reply
In what way is this an FBI debacle? The F-35 is a debacle, as is the TSA. But the program the FBI was running to deanonymize Tor users? Not even close.
[+] edgarallanbro|10 years ago|reply
It's two months after the FBI debacle and people still don't know the difference between CMU and CERT.
[+] p4wnc6|10 years ago|reply
I don't think universities should get a free pass on whatever their affiliated FFRDCs might do. If the university wants to be disassociated from an unethical action, do so by severing the tie between the university and the FFRDC, and stop lending credibility and credential to the FFRDC via the university's reputation. Otherwise, accept the fair guilt by association that will follow.
[+] marshray|10 years ago|reply
Literally the first words on cert.org are

    CMU[http://www.cmu.edu/] SEI[http://www.sei.cmu.edu/] CERT Division

    [CERT logo] [SEI logo] [CMU Logo]
The blog posts on cert.org all link to cmu.edu.

The first words on www.sei.cmu.edu are

    CMU SEI CERT Division

    [SEI logo] [CMU Logo]

CMU is clearly permitting CERT to use and promote its logo; in fact, it's almost exactly the same webpage.

CMU is clearly endorsing the actions of CERT.

[+] archimedespi|10 years ago|reply
I'm waiting for someone to build an implementation of Tor in a proof-verifiable language.

That would be pretty cool, since anyone could prove source correctness automatically.

[+] dguido|10 years ago|reply
That would help with things like the memory safety of the daemons you run, but that hasn't been the problem when Tor has failed its users.

Tor has failed its users because the idea of running a public Tor cloud with volunteer entry, onion, and exit nodes is ludicrous. It means that the entire network is under surveillance all the time, the exact opposite of what you want. There has been widespread confirmation that the data you transfer via the public Tor cloud is being passively surveilled at the endpoints and actively modified when you, for example, download software. This makes it incredibly dangerous to use, likely more dangerous than just using the regular internet.

There are many other problems (like the fact that .onion sites are a dirty hack and likely have many undiscovered weaknesses like the ones CMU found) but nearly all of them are either deployment or architectural issues, not code security issues.

[+] Lanzaa|10 years ago|reply
I think most proof-verifiable languages are too limited to prove many of the security properties valuable to Tor users.

For example, side-channel attacks, a classic attack on computerized cryptography: I don't know of any proof language that can protect against them.

If you look online there are a few lists of Tor attacks, including: snooping on exit relays, application issues, traffic correlation, website fingerprinting, congestion attacks, and blocking Tor access (declining to extend). Most of these are issues in the design of the Tor system, not something I think source-code proof systems are capable of preventing.
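The side-channel point can be made concrete with a sketch: both functions below are functionally correct, and a proof system would happily bless either, but the first leaks how many leading bytes of the secret matched through its running time.

```python
# Why memory-safety or functional-correctness proofs don't cover side
# channels: both compares return the right boolean, but the first one's
# early exit makes its runtime depend on how much of the guess is right.
import hmac

def leaky_compare(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:          # early exit: timing reveals match length
            return False
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    # Examines every byte regardless of where mismatches occur.
    return hmac.compare_digest(secret, guess)

print(leaky_compare(b"s3cret", b"s3cret"))          # True
print(constant_time_compare(b"s3cret", b"guess!"))  # False
```

A proof of "returns True iff the inputs are equal" holds for both; the timing behavior that an attacker exploits lives outside the property being proved.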

[+] munin|10 years ago|reply
What property would you prove, though? You could create a memory-safe program that does not provide anonymity. How do you represent "anonymity" in the proof system?
[+] enginn|10 years ago|reply
Solve the problem first, and then write the code.