Keep in mind that this DPI system, besides making it easier to monitor people's communications and even censor them, would also make it very easy for them to identify the type of traffic that goes through the pipes, so they know exactly how to charge for it differently. Which brings us to another of the ITU's proposals: killing net neutrality and charging for "premium services" like watching YouTube, or for other types of P2P traffic (Skype, WebRTC, torrents, etc.):
http://itu4u.wordpress.com/2012/10/25/proposal-for-ict-and-i...
They want to do this, they say, to help "grow the Internet" (hasn't the Internet grown fast enough without their help over the past 20 years?), despite evidence suggesting that a sender-pays system would slow down the growth of the Internet, not speed it up (research paper):
http://mercatus.org/publication/do-high-international-teleco...
Note that a kind of sender-pays model is already used in practice: all big content providers pay for CDN service on a per-GB basis, and the CDN company in turn pays for bandwidth.
This is not worthy of "the sky is falling" levels of panic.
My experience with standardization efforts is that they generally run well behind the technology innovators. DPI has been around for a while. A DPI standard (or series of standards) out of the ITU will simply make public the baseline expectations of vendors and users of DPI systems.
On the other hand, CALEA has been on the books for over 15 years, and that is the kind of thing to watch out for -- it does mandate features that provide snooping to the government on demand.
(Speaking as someone who has implemented [shallow] inspection/filtering and CALEA-type features on comms equipment for markets both in and outside of the US.)
I'm not trying to flame you here, but I really must ask: How do you live with yourself?
I know how trollish that sounds, but I seriously don't understand engineers who voluntarily work against our own ethos. It's not like this is an industry in which implementing CALEA is the only possible way to feed a family. Job opportunities are practically endless.
Can someone explain the problems with the ITU creating specifications? I thought I understood it, but all the recent excitement and anti-ITU sentiment tells me I must be missing something.
How is what the ITU does different from any standards body? They can propose standards for DPI, censoring, etc., but that won't magically make Level3 or Comcast or any particular ISP start playing with my packets.
What am I missing? Where does the stuff the ITU does somehow change the policies and actions of my ISP?
It lets individual governments "pass the buck" of responsibility. When the objection "This seems like a bad idea" is raised in parliament, the response is "We're just doing what the ITU recommends."
I think it is a good time to start incorporating DJB's NaCl into... everything. Also run HTTPS Everywhere in the meantime, and set up opportunistic IPsec.
Sad day.
On a related note, I suggest we stop calling the heads of state and of bureaucratic organizations like the UN "leaders" and start referring to them by their real self-appointed role: "rulers".
Language shapes perception, and we've been using the wrong term for too long.
Isn't this an argument to continue using the term "leaders"? Surely calling them "rulers" would, under your logic, push them further towards "rulership"?
And of course you can't even read what they approved, because this extra-governmental body inexplicably restricts the text of its decisions to a nebulous list of "TIES users".
Appendices I & II are frightening in their casual use of major headings. It looks an awful lot like the fabled "tiered internet".
Heck, one of the diagrams even categorizes IP traffic into 4 levels: Gold, Silver, Bronze, P2P. I'm not an owner of tinfoil hats, but this has a lot of implications for a distributed web.
Appendix Examples:
I.2.1 Differentiated services based on service identification
I.2.2 Traffic monitoring
I.2.4 Traffic statistics and services-based billing
I.3.1 DPI used as a bidirectional tool for service control
I.5 DPI use case: Traffic control
I.5.3 DPI-based policing of peer-to-peer traffic
I.9.2 DPI engine use case: Simple fixed string matching for BitTorrent
II.4.11 Example “Identify uploading BitTorrent users”
II.4.13 Example “Blocking Peer-to-Peer VoIP telephony with proprietary end-to-end application control protocols”
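For anyone wondering what I.9.2's "simple fixed string matching" amounts to in practice: the BitTorrent handshake begins with a fixed byte sequence, so the matcher can be a one-liner. A minimal sketch (the function name is mine; the signature bytes are from the BitTorrent wire protocol):

```python
# Sketch of an I.9.2-style use case: classify a TCP payload as BitTorrent
# by fixed string matching on the protocol handshake. The handshake begins
# with a length byte 0x13 followed by the literal "BitTorrent protocol".

BT_SIGNATURE = b"\x13BitTorrent protocol"

def looks_like_bittorrent(payload: bytes) -> bool:
    """Return True if the payload starts with the BitTorrent handshake."""
    return payload.startswith(BT_SIGNATURE)

# A real handshake also carries 8 reserved bytes, a 20-byte info-hash and a
# 20-byte peer id, but the fixed prefix alone is enough for this matching.
handshake = BT_SIGNATURE + bytes(8) + b"\x00" * 20 + b"-PC0001-123456789012"
print(looks_like_bittorrent(handshake))          # True
print(looks_like_bittorrent(b"GET / HTTP/1.1"))  # False
```

Which is also why naive matching like this stops working the moment the protocol adds obfuscation or encryption, as BitTorrent clients did.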
If this gets widespread enough, they'll just inspect traffic when it leaves your VPN gateway/server. VPN is fine for public wifi or for connections between predetermined networks, but you can't stretch it much past that.
A recent Chinese paper (its author is the creator of the GFW) suggests that the GFW is capable of using an SVM to filter SSH tunnel traffic at a success rate of 95%, without affecting normal SSH use:
http://www.solidot.org/story?sid=32532
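To illustrate the general idea (not the paper's actual method): statistical classifiers like an SVM work on side-channel features of a flow, such as packet sizes and inter-arrival times, which encryption does not hide. A toy sketch with a hand-rolled threshold standing in for a trained model; all names and thresholds here are invented:

```python
from statistics import mean, pstdev

def flow_features(packet_sizes):
    """Side-channel features visible even when the payload is encrypted."""
    return (mean(packet_sizes), pstdev(packet_sizes))

def looks_like_tunnelled_bulk_traffic(packet_sizes, size_threshold=900.0):
    """Toy stand-in for a trained classifier: interactive SSH is mostly
    small keystroke-sized packets, while a tunnel carrying bulk traffic
    pushes the mean packet size up. A real system would feed many such
    features into a trained model (e.g. an SVM), not one threshold."""
    mean_size, _ = flow_features(packet_sizes)
    return mean_size > size_threshold

interactive_ssh = [48, 52, 48, 96, 52, 48]          # keystrokes and echoes
tunnelled_download = [1460, 1460, 1460, 612, 1460]  # mostly full-size segments
print(looks_like_tunnelled_bulk_traffic(interactive_ssh))     # False
print(looks_like_tunnelled_bulk_traffic(tunnelled_download))  # True
```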
The hardest thing for me to understand is how every telco can complain of congestion, but they're perfectly willing to introduce unnecessary overhead for DPI.
Monitoring systems are usually extra boxes that you place as pass-through on the cables, or you tap the signal with attenuators on electrical cables, or you bend the fiber enough to tap about 10% of the light off it, enough to regenerate the signal. Other solutions, such as simply using mirror ports on an Ethernet switch, exist as well.
They won't need luck, they have the rubber stamps of "National Security" and "We Promise We'll Only Use It For Bad Guys". I imagine that someone somewhere could find a way to apply the interstate commerce clause to let the US gov do what it wants there, too.
They've already been effectively wiretapping and storing a lot, if you believe some of the recent whistleblowers, and so far there's been no effective pushback.
One way to think of this is as a cat and mouse game.
Another way is that some people are developing high-bandwidth, distributed, and anonymous internet software, while some other people are writing tests that this software is expected to fail at present, but that the developers must pass in order to reach the next level of hardened security and reliability.
You know we're about to have a bad time when the highly tech-savvy crowd on Hacker News is left asking "what does this mean?" all over the comments in the thread...
We need to start establishing a DPI/MITM-resistant secret-sharing protocol, and a related IP protocol.
E.g., a series of simple reCAPTCHA-style tasks that need to be solved within 3 seconds each to be considered secure, in order to rule out an active MITM using either machines or manual labor. Once a secret key is established, it can be used from that moment on.
Otherwise, opportunistic encryption can be MITMed and becomes useless.
Less secure: require committing the equivalent of 10 seconds of a modern i5 processor to establish the shared secret within 20 seconds. That would be acceptable for the two sides of a connection, but not for a server or a MITM attacker.
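The cost-commitment idea above is essentially hashcash: make key establishment require a proof of work that is expensive to produce but cheap to verify, so a MITM box that must attack every connection on the wire can't keep up. A toy sketch (the difficulty is deliberately tiny here; the scheme described would tune it to ~10 CPU-seconds):

```python
import hashlib
from itertools import count

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 with the challenge starts with the
    required number of zero bits. Expected cost: 2**difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification is a single hash, regardless of difficulty."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

challenge = b"per-connection random challenge"
nonce = solve_pow(challenge, difficulty_bits=12)  # tiny difficulty for demo
print(verify_pow(challenge, nonce, 12))  # True
```

The asymmetry (millions of hashes to solve, one hash to check) is what makes it cheap for the two endpoints but ruinous for a middlebox handling thousands of flows at once.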
Can someone explain what this means from a practical point of view for telecommunications end users? Does it mean that NGN traffic will all undergo deep packet inspection in transit? What exactly are NGNs?
There are a lot of misconceptions about DPI going on here. Without taking a position, let me just share a few facts (I work with this...):
- DPI is used in most networks in the world, mainly for throttling P2P or for traffic analysis. Lawful interception, i.e. classical wiretapping, is a different thing, although it could use the same hardware.
- DPI does cost the operator some resources, but the impact on end users is negligible (except for malfunctions or improperly dimensioned DPI boxes). It only adds a tiny amount of latency, which puts an upper limit on bandwidth. In many cases, bandwidth is limited somewhere else on the link anyway.
- DPI hardware works in several stages, typically: analyze the IP flow (shallow inspection); if that is not enough to decide, do DPI (analyze HTTP headers, etc.); if that is still not enough, rely on heuristics based on traffic patterns and so on.
- DPI is not very good at dealing with encrypted traffic, and most DPI boxes will not be able to do anything other than shallow inspection on it. (Some claim to do traffic-flow analysis, and some (normally transparent proxies) can break the HTTPS flow in two, but that would generate client errors.)
- The ITU's specifications won't have much impact on what ISPs do at the moment (as they already do it), but I guess they could become part of the wider regulatory discussion.
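The staged pipeline described above can be sketched in a few lines: cheap checks first, and each stage only runs if the previous one couldn't decide. All field names and thresholds here are invented for illustration:

```python
def classify(flow):
    """Staged classification: shallow inspection, then deep inspection,
    then traffic-pattern heuristics. `flow` is a dict of whatever the
    probe has extracted so far."""
    # Stage 1: shallow inspection on the 5-tuple only (here: dst port).
    well_known = {80: "http", 443: "https", 22: "ssh", 53: "dns"}
    if flow["dst_port"] in well_known:
        return well_known[flow["dst_port"]]

    # Stage 2: deep inspection -- look inside the payload (e.g. HTTP headers).
    payload = flow.get("payload", b"")
    if payload.startswith((b"GET ", b"POST ", b"HTTP/")):
        return "http"
    if payload.startswith(b"\x13BitTorrent protocol"):
        return "bittorrent"

    # Stage 3: heuristics on the traffic pattern when the payload is opaque.
    sizes = flow.get("packet_sizes", [])
    if sizes and sum(sizes) / len(sizes) > 1000:
        return "bulk-transfer (heuristic)"
    return "unknown"

print(classify({"dst_port": 443}))  # https
print(classify({"dst_port": 51413,
                "payload": b"\x13BitTorrent protocol" + bytes(48)}))  # bittorrent
```

Note how encryption only knocks out stage 2: stages 1 and 3 still work, which matches the point about encrypted traffic below.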
Historically, the ITU was a forum through which Ma Bell marketed her switching equipment to national phone companies outside North America. Presumably this is more of the same, except that Ma Bell was divested of most equipment manufacturing, and it's all IP now, so other companies (from the USA, Japan, and Germany, with a few others) are driving. The capacity for spying on and shaping the internet use of customers/subjects is appealing for many decision-makers. Among "first world" nations, we probably shouldn't discount the persuasiveness of the various American TLAs with their various evil projects. Don't you want your national constabulary to buy the same crap the DHS buys?
I suspect that if all of this was for reasonable network maintenance then the ITU wouldn't need to be involved. The equipment manufacturers have the researchers; they don't need the ITU to tell them how to shut down open mail relays.
What makes me less concerned about these proceedings is that all this is happening anyway, whether we're told or not. Yes Virginia, there are bad people in the world, and some of them run phone companies, while others spy on the citizens who pay their salaries. Eventually they'll all be routed around, and we have encryption anyway. Sure key management is hard, but if your opponent controls national network operators then you'll use something better than TLS.
I do find that description a bit odd, since all the early ITU specifications were made mostly by European vendors, and were incompatible with the American versions of the protocols, etc.
Anyway - a lot of these specs are made by the very researchers of the various vendors that you talk about - ITU is just a forum they use to collaborate.
If this were some form of physical information transit, it would never have passed. While I'll be the first to admit that I don't know what will come from this, or even have a basic understanding of DPI, this cannot be good for those who value their privacy.
I imagine that forcing encryption over all connections would be a countermeasure to this? Getting all websites to offer encryption might be another story, though...
Against big governments, nothing is stronger than the little men and women getting together.
If this is so bad (which I am not technically capable of understanding...), what shall we DO about it?
Signing the petition is probably not enough. Get people to have a minute of 'no internet' across the world? Similar to the Anti-SOPA movement? Suggestions welcome.
Some general information (PDF): https://www.cdt.org/files/file/Global%20Internet%20Governanc...
On the DPI issue: https://www.cdt.org/blogs/cdt/2811adoption-traffic-sniffing-...
Fucking awful.
http://bit.ly/Yx0Sya
This is not a good day, not a good day at all.
DPI only costs the operator money, not bandwidth.
Even for the ISPs that say they don't do it, I will bet you can go and find a box somewhere on their network running Snort that does DPI.
http://broabandtrafficmanagement.blogspot.com/2012/12/itu-ap...
Governments who are fighting to keep these negotiations as 'closed' as possible want our communications to be as 'open' as possible?
War = Peace
Freedom = Slavery
...