item 8271576

UCLA, Cisco and more join forces to replace TCP/IP

98 points | ossama | 11 years ago | networkworld.com

84 comments

[+] erik123|11 years ago|reply
In NDN, all data is signed by data producers and verified by the consumers, and the data name provides essential context for security.

Centralizing the concept of security in the network's architecture will create an intractable problem. Certain parties will still want to impose their desire to eavesdrop on the data. Therefore, there cannot be any real security in such a centralized security design.

The unsuspecting times in which it was still possible to roll out security jokes such as SSL are over now. Nowadays, 95% of the world population (and their governments) will refuse to adopt any centralized security design, because they do not trust it.

My impression is that the project is dead on arrival.

[+] jrapdx3|11 years ago|reply
I'm wondering how the proposed security model differs from current practices. What you've quoted doesn't seem different. After all, on the web today, data producers send a certificate which the client "verifies". At least the long list of "CA" names in my browser purports to be "verified" by an "authority".

Maybe the point you're making is exemplified by the news item concerning a security breach that was detected after going on for 13 years. (http://cybertinel.com/wp-content/uploads/2014/09/HARKONNEN-O...)

The part that got my attention was the criminals having fooled users into revealing passwords using certificates purchased from CAs (at a total cost of $150,000).

That seems to mean the current CA system is broken. It's not a big surprise that a centralized security concept is in NDN--Verisign is one of its main supporters.

[+] SudoNick|11 years ago|reply
I think this is the first time I've heard of NDN, and I've only spent a short while reading about it. My first impression is:

1) It involves addressing chunks of data rather than the hosts that hold those chunks -- somewhat like using URLs at the network level. So instead of IP/HTTPS, where the network learns of host communications without learning of specific data exchanges, NDN would reveal those specific data exchanges to the network.

2) The mandatory "signature, coupled with data publisher information, enables determination of data provenance..." aspect would cut both ways, for we would each be data consumers in some contexts and data producers in others.

I hope I read something to dispel my concern, but I worry about increased metadata exposure and a reduced ability to achieve beneficial levels of privacy/anonymity.

[+] jasonwatkinspdx|11 years ago|reply
As usual, people ranting on HN without bothering to read what they're ranting about.

You're assuming centralization. All that's in the specs are slots for a locator and a signature value. It's left up to the application to define trust semantics, be it PKI, WoT, or whatever.

http://named-data.net/doc/ndn-tlv/signature.html
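To sketch that point: the packet format only carries opaque signature and key-locator slots, while what counts as "trusted" is a policy the application plugs in. The `DataPacket` fields and the `deliver` helper below are illustrative names, not the real NDN API.

```python
# Sketch: the network layer carries a signature slot; trust is an
# application-supplied policy (PKI, web-of-trust, key pinning, ...).
from dataclasses import dataclass
from typing import Callable

@dataclass
class DataPacket:
    name: str
    payload: bytes
    key_locator: str   # where to find the signing key
    signature: bytes   # opaque to the network itself

def deliver(pkt: DataPacket, trust_policy: Callable[[DataPacket], bool]) -> bytes:
    # The stack only asks the policy; it imposes no authority of its own.
    if not trust_policy(pkt):
        raise ValueError(f"untrusted data for {pkt.name}")
    return pkt.payload

# One app might walk a PKI chain; another a web of trust. This toy
# policy just pins a known key locator.
pinned = lambda pkt: pkt.key_locator == "/alice/KEY/1"

pkt = DataPacket("/blog/post1", b"hi", "/alice/KEY/1", b"sig-bytes")
print(deliver(pkt, pinned))  # b'hi'
```

Swapping `pinned` for a different callable changes the trust model without touching the packet format, which is the decentralization argument in a nutshell.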

[+] reilly3000|11 years ago|reply
That is why I'm of the mind that both canonical identity and permissions need to be managed in a globally distributed fashion with a blockchain. To support the volume of data transfer between devices needed to distribute global-scale permission data, networking does need to change fundamentally.

I envision my fridge holding a permission entry saying my neighbor can use my car tomorrow, and also that a person I will never meet has purchased a ticket for a flight to Argentina next week. That data is constantly shared on a mesh network with my car and everybody else's as I drive down the road, across all of my devices and those of every other participant. No government or corporation should be able to hold the whole data set, and none of it should have a very long half-life.

The only way we can move forward to a truly connected version of the future is with trust, and the only way to have a truly trusted security model is to have it be globally distributed. NDN may or may not be the next version of networking to support it, but I'm rather confident that TCP/IP isn't going to be the way we get there.

[+] takeda|11 years ago|reply
No, the protocol does not impose a central authority for data signing; it is up to the application to do this.
[+] jimmaswell|11 years ago|reply
In what way is SSL a "security joke"?
[+] nnain|11 years ago|reply
One of the earliest attempts to replace the TCP/IP model (or rather the lower layers of the ISO OSI model) was Asynchronous Transfer Mode (ATM). Despite being a well-intentioned idea, it failed to see real-world usage because of its complexity.

Along the way many developments happened. People learned to live and work with IPv4. Even IPv6 hasn't picked up, despite solving some important problems. So when it comes to updating the core networking infrastructure, I don't think TCP/IP is replaceable. It just works very well now: you can have real-time chats and high-throughput data lines, there are time-tested code libraries, and there's a vast amount of knowledge, so you can build apps fast.

As I understand it, what this 'Named Data Networking' technology proposes is to replace the IP addressing scheme with names. I'm not sure the whole internet backbone infrastructure would change its networking strategy now.

The TCP/IP addressing format is very structured, and that's its strength. IMHO that's actually how communication should take place; not with names that can vary widely in format.

[+] ay|11 years ago|reply
"Even IPv6 hasn't picked up" - this needs correction.

http://6lab.cisco.com/stats/

https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...

http://www.worldipv6launch.org/measurements/

You can see that 9% of internet users in the US are IPv6-enabled. Germany is over 11%. Belgium is almost 30% (of course, due to its smaller population, that's fewer in absolute host count).

How many millions of IPv6 users that amounts to is left as an exercise for the reader.

Things are moving very fast: lots of large SPs have bumped their numbers within this year from low-to-mid single digits to nontrivial double digits, and lots more are in the pipeline.

All major CDNs support it, helping IPv6-enable thousands of sites that don't run IPv6 on the server itself. I'm saddened that the HN site, being a Cloudflare customer, has not flipped the switch - there's really zero excuse today. (http://blog.cloudflare.com/eliminating-the-last-reasons-to-n...)

(On a side note, there are today millions of users who don't need to worry about having any IPv4 at all - on T-Mobile's network. See: https://conference.apnic.net/data/37/464xlat-apricot-2014_13...)

Here's another data point, from my home gateway. (I'm in the remaining 70% of folks in Belgium who don't have IPv6 yet, so I use a Hurricane Electric tunnel. Vlan50 is the IPv4-only internet connection, so that counter shows IPv4 user traffic plus IPv4 tunnel traffic; count it as "aggregate".)

  ay-home#sh int Tunnel0 | inc packets|escr
    Description: Hurricane Electric -- Paris
    5 minute input rate 5000 bits/sec, 6 packets/sec
    5 minute output rate 5000 bits/sec, 3 packets/sec
       171400464 packets input, 193001468663 bytes, 0 no buffer
       90187695 packets output, 13837665814 bytes, 0 underruns
  ay-home#sh int Vlan50 | inc packets|escr 
    Description: Outside - internet-facing
    5 minute input rate 143000 bits/sec, 24 packets/sec
    5 minute output rate 34000 bits/sec, 25 packets/sec
       618491041 packets input, 607147678054 bytes, 38 no buffer
       390716032 packets output, 83476555174 bytes, 0 underruns
  ay-home#
Do your math.
[+] lisper|11 years ago|reply
My first startup was an attempt to make an ATM-like network that actually worked:

http://www.linuxjournal.com/article/3293

Although mostly of historical interest now, I think the basic ideas are still sound. But the problem (as we learned the hard way) is that deploying a new network architecture is mainly a political problem, not a technical one.

[+] PinguTS|11 years ago|reply
It's not right that ATM failed to see real-world usage.

ATM was the backbone of the German infrastructure run mainly by German Telekom for years. It provided a very good service especially for telephony (ISDN). Germany basically had the best telephony network in the world.

But the problem is that IP does not fit with the 55ms time slot in ATM. That is why all the backbones are being replaced with so-called Next Generation Networks (NGN), which are basically pure IP traffic: everything will run on top of IP, no longer in parallel to IP. That basically means moving to VoIP in the backbone and consequently at the consumer end as well.

[+] walshemj|11 years ago|reply
Yep - if having a better networking stack were going to win, we would be using OSI, X.400 and X.500 now, not TCP/IP.

Still, I'd probably still be working for a telco and have a really cool mail address, though: cn=uk cn="maurice"

[+] fleitz|11 years ago|reply
I wonder if it's just my naiveté, but it sounds like this is more likely to produce an X.400 than an SMTP.

The vision seems pretty grand: an all-encompassing, wholesale replacement of the entire networking stack, rather than a small, easy-to-implement, iterative approach. It seems the biggest thing the TCP/IP folks got 'wrong' was the 32-bit address space, and even that small change is taking forever to be deployed.

Yes you could certainly improve TCP/IP but is it going to be 10X better?

[+] tgflynn|11 years ago|reply
> Yes you could certainly improve TCP/IP but is it going to be 10X better?

Doubtful, and I think you'd really need to see that level of improvement to have any hope of replacing TCP/IP.

[+] vidarh|11 years ago|reply
If the "CDN" part of it proves sufficiently useful, this could be deployed layered on top of IP, or wrapped in UDP or even a TCP connection. Capable clients would then "just" need a means of discovering the nearest capable router that'll let them tunnel. And while IPv6 can also easily be tunnelled, the benefits of doing so are much smaller: IPv6 doesn't give you that much if your host still has an IPv4 address too.

But if this system lets your ISP drop in a new router or two that can know, just by looking at packet headers, that it is allowed to return data from a local cache instead of passing the request on to the server and waiting for a response, then it could show sufficient benefits as soon as a couple of large bandwidth hogs start supporting it - e.g. if Netflix or YouTube made use of it.

That's potentially a pretty different proposition.

Then again, the question is whether they need to re-architect the lower level protocols to do this, instead of defining a protocol on top of TCP or UDP that services that are actually likely to benefit can implement.

[+] signa11|11 years ago|reply
> It seems that the biggest thing the TCP/IP folks got 'wrong' was the 32 bit address space, and even that small change is taking forever to be deployed.

I guess you are alluding to IPv6 here, and IMHO IPv6 provides quite a large number of changes from vanilla IPv4. It is not just a much larger address space...

[+] drvdevd|11 years ago|reply
The "forever to be deployed" part is a crucial observation. Perhaps research into how to get the Internet community to adopt new protocols is more relevant than the protocols themselves. In other words: how can we speed up the adoption of IPv6?
[+] mmaunder|11 years ago|reply
But it's a great way to sell more stuff.
[+] walshemj|11 years ago|reply
Scrapping IPv6 and starting again would be a better solution, and it was trivially obvious back in the mid-90s that IPv6 was a POS.
[+] jrapdx3|11 years ago|reply
Maybe it's just my nature to be guarded about grand visions, but does this idea really have a good chance of succeeding? Will it displace TCP/IP given the extent of IP deployment around the world?

No doubt there are people here who are network experts who can give a more learned review than I can after quickly reading the overview on the website.

I have a lot to learn about the subject...

[+] tgflynn|11 years ago|reply
Though I'm hardly a networking expert I did some contract work implementing NDN simulations so I probably know a bit more about it than most people.

I'm highly skeptical that data entities provide a better (or even adequate) base abstraction for networking than network addresses associated with physical machines (ie. hosts, routers, etc.).

It seems to me that the problems NDN wants to solve, primarily content caching, would be better addressed at a higher abstraction level, as I think CDN's already do. This is my opinion, and I'm certainly open to being proved wrong by more qualified viewpoints.

As for the likelihood of NDN ever (a word I would almost never use, especially in regard to technology) replacing TCP/IP, it seems hard to believe given the extreme slowness with which IPv6, a comparatively minor change, is being adopted.

[+] takeda|11 years ago|reply
NDN was designed around today's most common Internet use cases. If you think about it, most of the time we are requesting content from a specific place, but we don't really care where the server is located or what address it has; all we care about is the content and whether it comes from the intended (trusted) source.

Assuming that the same name always references the same data gives an edge, because routers are aware of the data: they can cache content locally, and when someone else requests the same thing they can serve what they have without asking the uplink for it.

It gives an edge in certain use cases; probably the biggest ones would be YouTube, Netflix, etc. A lot of effort goes into the TCP/IP network to provide a great experience for the user, through CDNs, anycast routing, and other tricks. With NDN you already have a network that is very cache-friendly, which makes CDNs unnecessary as long as you design your protocol to exploit the network's properties. Another advantage is on lossy networks like wireless ones. For example, when you request content that goes through many hops and the response is dropped, thanks to caching it can be resent from the point where it was dropped, without going back to the source. This also helps when the consumer is on the move. NDN has other nice properties too: if a certain name is set up so that it can be shared by multiple parties, it is possible to implement a chat without any server, which is quite cool.
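The caching behavior described above can be sketched in a few lines. This is a toy model, not the actual NDN wire format or forwarding daemon; `Producer` and `Router` are illustrative names, and a real router also keeps a pending-interest table and a forwarding base.

```python
# Toy model of NDN-style in-network caching: routers forward a request
# for a name toward the producer, but answer from their local content
# store once they already hold the named data.

class Producer:
    def __init__(self, content):
        self.content = content          # name -> data

    def get(self, name):
        return self.content[name]

class Router:
    def __init__(self, upstream):
        self.upstream = upstream        # next hop toward the producer
        self.content_store = {}         # cache of named data
        self.cache_hits = 0

    def get(self, name):
        if name in self.content_store:  # satisfied locally: no uplink traffic
            self.cache_hits += 1
            return self.content_store[name]
        data = self.upstream.get(name)  # forward the request upstream
        self.content_store[name] = data # cache the data on the way back
        return data

producer = Producer({"/videos/cats/seg0": b"..."})
edge = Router(Router(producer))         # two-hop path to the producer

edge.get("/videos/cats/seg0")   # first consumer: fetched from the producer
edge.get("/videos/cats/seg0")   # second consumer: served from the edge cache
print(edge.cache_hits)          # 1
```

The second request never leaves the edge router, which is the CDN-like property the comment describes: popular content migrates toward its consumers automatically.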

Given these benefits, NDN is a double-edged sword: while it makes publishing content to many people simple, it makes certain tasks harder. For example, implementing something like ssh over it might be a bit difficult. In fact, anything that benefits from pushing data or requests (a simple example from one of the projects: controlling lighting infrastructure) will be complex. It is still possible to implement, but it is harder to do than in TCP/IP.

As for adoption, it is hard to say. It definitely won't be easy. The protocol is not a drop-in replacement for TCP/IP; everything needs to be reinvented. You can possibly convert existing applications to work with it, and in fact it should be possible to carry TCP/IP over NDN, but then you lose all of the nice properties of the protocol. Some things would work better, for example stripping TCP/IP and implementing HTTP directly on top of NDN. Some people have already created an NDN<->HTTP gateway.

On the other hand, it could be extremely beneficial for specific use cases we are somewhat struggling with, like multicasting of video. One strong point is that the protocol can be implemented on top of TCP/IP, and in fact that's how the NDN testbed is (or at least was when I was there) implemented. The adoption goal is to grow a network built on top of TCP/IP until it is big enough that the TCP/IP layer below eventually collapses away and NDN takes its place. That of course assumes NDN will handle all of our needs and make TCP/IP unnecessary; otherwise it'll be just an overlay network. They are also trying to avoid other mistakes of IPv6 by concentrating on making it attractive not just technically but also from a business perspective. That's why they are partnering with vendors.

Source: I was actually involved in NDN between 2010 and 2012, and I know the people mentioned in the article in person. One of my projects was video streaming over NDN.

[+] fla|11 years ago|reply
I am no network expert, but I guess their idea is to optimize the traffic between endpoints. The last segment (you <-> ISP) would still use TCP over IP.
[+] kv85s|11 years ago|reply
I believe what they're proposing is largely the same as, if not identical to, Content-Centric Networking from Xerox PARC.

The central idea is:

  Instead of asking one particular server for some content, just ask for the content by name.
Since the content may come from any handy server, it is up to the receiver to validate that it is really the content he requested. Nothing about this implies the evil "centralized security model" people are going on about. Sure, some bad actor could weasel it in later, but it's not there now.
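The "receiver validates, whatever the source" idea can be made concrete with a self-certifying name, where the name itself binds the content to a digest. This is a simplified stand-in: real NDN uses per-packet public-key signatures rather than a bare hash, and the `/sha256=` naming scheme here is purely illustrative.

```python
# Sketch: a self-certifying name lets a receiver validate data no matter
# which cache or server supplied it, because the name carries a digest
# of the content it must match.
import hashlib

def publish(data: bytes, prefix: str) -> str:
    # Embed the content digest in the name at publication time.
    return f"{prefix}/sha256={hashlib.sha256(data).hexdigest()}"

def validate(name: str, data: bytes) -> bool:
    # Recompute the digest and compare it to the one the name claims.
    claimed = name.rsplit("sha256=", 1)[1]
    return hashlib.sha256(data).hexdigest() == claimed

data = b"chunk of a video"
name = publish(data, "/videos/cats/seg0")

print(validate(name, data))               # True: genuine content
print(validate(name, b"tampered chunk"))  # False: reject, wherever it came from
```

The transport never has to be trusted: any intermediary can serve the bytes, and a mismatch is detected at the edge.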
[+] takeda|11 years ago|reply
Yes, CCNx was the first implementation of NDN, and the NDNx they currently use is a fork of it.

AFAIK PARC is still receiving part of the NSF funding to continue working on CCNx.

[+] radicalbyte|11 years ago|reply
So if I've understood this correctly, NDN works by giving each piece of content a unique address, instead of stopping at the host?

Basically baking a URI into the low-level protocols?

[+] tgflynn|11 years ago|reply
Yes, and the contents can be cached by the routers, so to get a piece of a video you don't need a connection all the way to the source of that video but only to the nearest router that caches it.

That may make sense for content that has few sources and many users, like video (although I think CDN's mostly already solve this problem).

I don't think it makes much sense for interactive data and hence I don't think it's a good basis for implementing all networking protocols.

[+] Mawaai|11 years ago|reply
People have been trying for almost 15 years to replace IPv4. That's almost impossible; 96% of the traffic worldwide is still IPv4.

This project is dead on arrival.

[+] wernercd|11 years ago|reply
Read the other posters above... IPv6 is on the uptick, especially since IPv4 is more or less out of address space.
[+] Luker88|11 years ago|reply
This is the old Content Distribution Network idea. It does work - provided you can easily identify a resource in the network. URIs are hierarchical, but they do not follow the network connection hierarchy. Also, every router now needs to be able to track all the streams that go through it.

In short, everything explodes when you try to scale the thing.

[+] shmerl|11 years ago|reply
I hope keeping it all patent-free / patent-disarmament style is a requirement for participation.
[+] bediger4000|11 years ago|reply
I note that patents/"Intellectual Property" weren't mentioned in the article at all. I suspect, based on the participants mostly being corporations, that the whole thing will be covered by patents.

I think TCP/IP, being non-patented, slipped by the major corporations. A protocol anyone can implement, and where the "client" and "server" are pretty hard to tell apart, is disadvantageous to market incumbents and to surveillance agencies. For instance, nobody can charge fees for implementing TCP/IP. Nobody can license content servers. Nobody can accurately attribute a packet to a legally responsible entity ("one neck to wring").

The protocol to replace TCP/IP will be patent encumbered, it will make a complete distinction between "client" and "server", it will be centrally routed, it will be subject to surveillance, and servers will be licensed, and costly. If NDN doesn't do some or all of these things, it's already dead.

[+] dredmorbius|11 years ago|reply
I think you'd have to go beyond patent-free and require a mutual defense pact. Might even get DoJ to sign off on that.
[+] colanderman|11 years ago|reply
Someone who knows more than me: does this intend to complement TCP, or replace TCP? If the latter, how would one use NDN to implement a system that naturally fits the "conversation" model of TCP, e.g. an MMORPG?
[+] legomylibrum|11 years ago|reply
I don't think you would use TCP for an MMORPG; UDP is more common in games because a dropped frame here and there doesn't matter to most games, and it's worth the lower overhead.
[+] islon|11 years ago|reply
What could possibly go wrong? It's not like the whole internet as we know it depends at some level on TCP/IP, with (probably) billions of lines of code depending on it.
[+] scrame|11 years ago|reply
Yeah, good luck with that. I'm more surprised they didn't say they would fix it with a MongoDB backed Facebook app written in node.js.
[+] jdimov|11 years ago|reply
Umm.. how about we NOT replace TCP/IP with anything, because it may be the only well-designed thing on the Internet that actually works? If you want an impossible superhero project to work on, try replacing HTTP instead - at least you'd actually be solving a problem.
[+] takeda|11 years ago|reply
The people who are involved in NDN also had a huge part in making the current TCP/IP work.

For example Van Jacobson, who started the idea and made huge contributions - one of them implementing congestion control in TCP/IP. Some people don't know this, but in the late '80s the Internet actually collapsed under its traffic and was practically unusable until his fix.

Lixia Zhang, for example, has been working on TCP/IP since 1981; she was responsible for the Resource ReSerVation Protocol (RSVP), which is implemented by almost every major router vendor today for Internet resource management and traffic control applications.

[+] antocv|11 years ago|reply
What's wrong with HTTP? How would you improve it?

It's a text transfer protocol; you can even build applications with text-only clients and servers. One only needs echo, bash, and netcat to make a server and a client.
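The point that HTTP is just text on a socket can be demonstrated without netcat too. A sketch in Python: it spins up a throwaway local server and speaks the request by hand, byte for byte, the way netcat would.

```python
# HTTP is plain text over a socket: a demonstration sketch, not
# production code. We run a tiny local server, then type the protocol
# by hand over a raw socket.
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The request is literally a few lines of text:
with socket.create_connection(server.server_address) as s:
    s.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk

print(reply.decode().splitlines()[0])  # status line, e.g. "HTTP/1.0 200 OK"
server.shutdown()
```

The response comes back as readable text as well (status line, headers, blank line, body), which is exactly why a shell, echo, and netcat are enough to play both sides.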

[+] SixSigma|11 years ago|reply
> more secure

If something is already secure, how can you make it more so?

What they mean is "less insecure".

[+] deciplex|11 years ago|reply
Something can be secure enough that the computational power required to break it is probably not available to various actors up to and perhaps including nation-states (e.g. RSA), yet still well short of the "requires more energy to compute than is available in the visible universe" benchmark (e.g. AES, probably). Yet both could still be regarded as "secure".
[+] ivoras|11 years ago|reply
Nothing except efficiency is preventing us from using names as parts of the network/subnet hierarchy instead of numbers, e.g. steve.home.town.country instead of 192.168.5.6 (or the same thing in IPv6), and even the efficiency could be improved by smart use of hashing... BUT! The major problem I see here is that there are simply more numbers than words.

In practice, especially at large companies, it will certainly degrade into workstation001, workstation002... workstation999, and then we're effectively back where we started: using numbers.

This looks like a solution in search of a problem.

[+] tgflynn|11 years ago|reply
That's not what NDN is about.

NDN assigns names (or addresses) to data contents, not physical machines/interfaces like IP does. So it's conceptually quite different from the way IP routing works.

The issue you mention is already solved by DNS.

> This looks like a solution in search of a problem.

NDN attempts to make content distribution more efficient through caching. Whether solving that problem justifies rewriting the entire network stack is highly questionable.