item 12840903

The death of transit?

89 points | colinscape | 9 years ago | blog.apnic.net

19 comments

dsr_ | 9 years ago
Well, no, transit isn't dead. But when your traffic volume rises into the top N sources/destinations of the entire Internet -- let's say N is around 10 -- you discover it's cheaper to run your own global network.

And that's what Google, Facebook, and Amazon, at the very least, have done: bought fiber, hired network engineers, and designed things that work efficiently for them. If YouTube is 90% of Google's traffic, it's not surprising that Google's network looks like a CDN. Amazon wants to interconnect their AWS datacenters to lower their internal traffic costs. Facebook wrote a new routing protocol (Open/R).
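The "cheaper to run your own network" claim is really a fixed-vs-marginal cost crossover. A toy break-even calculation, where every price below is an assumption for illustration rather than a real transit or fiber quote:

```python
# Illustrative only: break-even point between buying transit and running
# your own network. All prices below are assumptions, not real quotes.

TRANSIT_PRICE_PER_MBPS = 0.50      # $/Mbps/month, assumed commodity transit rate
OWN_NETWORK_FIXED_COST = 400_000   # $/month, assumed fiber leases + staff
OWN_NETWORK_PRICE_PER_MBPS = 0.05  # $/Mbps/month, assumed marginal cost on own fiber

def monthly_cost_transit(mbps: float) -> float:
    return mbps * TRANSIT_PRICE_PER_MBPS

def monthly_cost_own(mbps: float) -> float:
    return OWN_NETWORK_FIXED_COST + mbps * OWN_NETWORK_PRICE_PER_MBPS

def breakeven_mbps() -> float:
    # The fixed cost is recovered once the per-Mbps saving covers it.
    return OWN_NETWORK_FIXED_COST / (TRANSIT_PRICE_PER_MBPS - OWN_NETWORK_PRICE_PER_MBPS)

print(round(breakeven_mbps()))  # ~889,000 Mbps, i.e. roughly 900 Gbps sustained
```

Only a handful of companies sustain that kind of volume, which is exactly the "top N" point above.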

exelius | 9 years ago
CDNs are simply good architecture -- any good system has multiple tiers of storage, and web systems are no different. Multiple tiers of storage provide caching over intermediate links that may be saturated -- a pretty common model in logistics, whether you're moving data or packages.
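The "tiers of storage over saturated links" idea is just a bounded cache in front of a slower origin. A minimal sketch of one such tier (the toy origin function stands in for a fetch over an expensive transit link; real CDNs add TTLs, invalidation, and deeper hierarchies):

```python
from collections import OrderedDict

class EdgeCache:
    """Minimal sketch of one CDN tier: a bounded LRU cache in front of a
    slower origin fetch. Illustration only."""

    def __init__(self, origin_fetch, capacity=2):
        self.origin_fetch = origin_fetch    # called only on a cache miss
        self.capacity = capacity
        self.store = OrderedDict()
        self.origin_hits = 0                # how often we crossed the slow link

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)     # mark as recently used
            return self.store[key]
        self.origin_hits += 1
        value = self.origin_fetch(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return value

# Assumed toy origin; in reality this is a fetch across the backbone.
cache = EdgeCache(lambda k: f"content:{k}")
for key in ["a", "b", "a", "a", "b"]:
    cache.get(key)
print(cache.origin_hits)  # 2 -- only the first request per object crosses the slow link
```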

Also, all of the companies you list purchase interconnects or CDN services from large ISPs. So Amazon has a datacenter in Chicago that has a direct fiber connection to Comcast and Verizon networks, for example, that hosts a copy of its CDN endpoints. CDNs are ridiculously easy to build these days; I helped design the build-out of a CDN for a major ISP and we just used off-the-shelf open source software. The hardest part of the project was getting the purchase orders through my client's procurement process. In my mind, that means the engineering here is so uninteresting as to be commoditized -- which means that this is a business problem, not a technical one.

So transit is disappearing, but direct interconnects to big ISPs are just taking their place. On one hand, it's hard to argue against -- it's the right technical solution and there isn't a better option. But at the same time, it concentrates control to a worrying degree, especially as media, telecom and software continue to converge.

nowprovision | 9 years ago
Not sure this is the case: from AMZ data centres to other AMZ data centres, in most cases you go across NTT, Tata, etc. Google, on the other hand, is different -- e.g., Taiwan to Ireland is all Google network. Spin up VMs and traceroute.
convolvatron | 9 years ago
Scattering caches or CDN nodes around the internet has obvious value.

It's a pretty big jump, though, from caches to a world where the internet is structured like a cable TV service with an architecturally designated 'head end'.

Many internet architects seem to get very excited about losing end-to-end connectivity, and I can never figure out why. I guess it allows one to raise larger barriers to entry(?).

jlgaddis | 9 years ago
I'm one of those who values end-to-end connectivity, and I can remember when it was the rule instead of the exception.

There was a time -- before the rise of NAT -- when one could directly establish connections to others across the Internet without having to jump through hoops or rely on other tools (port forwarding, UPnP, a third party(!), etc.) to do it.

In general, as a network engineer, I dislike anything (especially NAT) that breaks end-to-end connectivity, simply because of the inherent problems that arise as a result.

In addition, some of the DDoS attacks we've seen recently would be a lot easier to prevent if NAT wasn't a thing (e.g., as an ISP, I could shut off a specific misbehaving device instead of having to cut off a customer's entire access).
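The device-identification problem can be sketched as a toy NAT translation table: many internal devices share one public address, so from outside (at the ISP) every flow presents the same source IP, and the port mapping that distinguishes devices lives on the customer's router. All addresses below are assumed documentation-range examples:

```python
import itertools

class Nat:
    """Toy many-to-one NAT: maps internal (ip, port) pairs onto ports of a
    single public IP. Sketch for illustration, not a real implementation."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.ports = itertools.count(40000)  # next free external port
        self.table = {}                      # ext_port -> (internal_ip, internal_port)

    def outbound(self, internal_ip, internal_port):
        ext_port = next(self.ports)
        self.table[ext_port] = (internal_ip, internal_port)
        return (self.public_ip, ext_port)    # what the remote server sees

nat = Nat("203.0.113.5")  # assumed example public address (RFC 5737 range)
seen = {nat.outbound("192.168.1.10", 5000),
        nat.outbound("192.168.1.11", 5000)}

# Two different devices, but the outside world sees a single source IP:
print({ip for ip, port in seen})  # {'203.0.113.5'}
```

Without the router's `table`, the ISP can only act on the whole public IP -- the entire customer.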

omegaworks | 9 years ago
Fascinating. The growth of containerized architectures will help facilitate this transition away from high-latency, centrally located services. At a high level, this means users on a particular continent will see their continent's 'shard' of data much more quickly than off-continent shards. I wonder if we'll start to see prioritization of locally available data in algorithmic content aggregators (Facebook, Reddit, G+) because of this.
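The shard-locality idea reduces to picking the region with the lowest round-trip time for each user. A minimal sketch, where the region names and latency figures are made-up illustration values, not measurements:

```python
# Assumed illustration values: user continent -> {shard region: round-trip ms}
ASSUMED_RTT_MS = {
    "EU": {"eu-west": 15, "us-east": 90, "ap-east": 250},
    "NA": {"eu-west": 90, "us-east": 10, "ap-east": 180},
}

def nearest_shard(user_continent: str) -> str:
    """Route the user to the shard region with the lowest assumed RTT."""
    rtts = ASSUMED_RTT_MS[user_continent]
    return min(rtts, key=rtts.get)

print(nearest_shard("EU"))  # eu-west
print(nearest_shard("NA"))  # us-east
```

A content aggregator doing locality-aware ranking would effectively fold a table like this into its scoring function.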

Also wonder if different regions will impose restrictions on building these duplication services in hopes of promoting growth of their own content-producing industries. I mean, China is already doing this with their firewall. Maybe we'll see the WTO grow to prevent this kind of manipulation.

With the proliferation of PaaS providers (Firebase, for example), I wonder if there will be transparent ways to proactively structure access to data to prioritize low latency.