Valve recently announced partner access to their network backbone, which addresses a lot of the problems mentioned here:
* Access to our network, giving your players protection from attack, 100% reliable NAT traversal, and improved connectivity.
* Tools for instantly estimating the ping between two arbitrary hosts without sending any packets.
* A high quality end-to-end encrypted reliable-over-UDP protocol.
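The second bullet's packet-free ping estimation presumably works from latencies each host has already measured to shared relays. Valve's actual method isn't described here, but a toy sketch of that triangulation idea might look like this (all names and numbers invented):

```python
# Hypothetical sketch: estimate the ping between two hosts without probing
# them directly, using each host's previously measured round-trips to
# shared relays. The best relayed path is an upper bound on direct latency.

def estimate_ping(a_to_relay: dict, b_to_relay: dict) -> float:
    """Upper-bound estimate of a<->b latency via shared relays.

    a_to_relay / b_to_relay map relay name -> measured round-trip in ms.
    """
    shared = set(a_to_relay) & set(b_to_relay)
    if not shared:
        raise ValueError("no shared relays to triangulate through")
    return min(a_to_relay[r] + b_to_relay[r] for r in shared)

pings_a = {"ord": 12.0, "iad": 25.0, "lax": 60.0}
pings_b = {"ord": 40.0, "iad": 18.0, "sea": 30.0}
print(estimate_ping(pings_a, pings_b))  # min(12+40, 25+18) = 43.0
```

Since no probe packets are sent at estimation time, this works instantly for arbitrary host pairs, at the cost of only bounding (not measuring) the direct path.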
>> Second, clients can select a route that gets off of the public Internet and onto our dedicated links as early as possible. On our backbone we can ensure that the routing is optimal, since we have peered with over 2,500 ISPs. We also prioritize the latency-sensitive game traffic over HTTP content downloads, which we can afford to do because game traffic makes up a relatively small percentage of our overall bandwidth utilization. And on our backbone, a sudden surge of traffic unrelated to gaming won’t degrade the experience.
Which is great if Steam is your only platform and you want to be tied to Valve forever, but as soon as you want your game on mobile, console or even another PC store like Epic or GOG you need another solution.
The Internet is optimized for throughput at lowest cost.
No amount of good netcode that you write can compensate for this.
The problem is the internet itself.
The internet doesn’t care about your game.
The Internet did change, such that streaming movies became viable. Given that games are now bigger than movies, could it be argued that the network lag we're seeing is the industry lagging behind the culture? (Which it generally does, at least a little.)
A part of this might well involve special deals between large companies involving specialized infrastructure. There are such things now involved with movie streaming. Google's plans involve the infrastructure part of that already. What happens to net neutrality?
Once game streaming becomes mature, then it's only a matter of time for App streaming to follow. We'll be back in the mainframe days, just with much larger companies with much larger multi-box cloud "mainframes", commanding the attention of a much larger portion of the population and the culture. "The computer is your friend. Trust the computer!" https://en.wikipedia.org/wiki/Paranoia_(role-playing_game)
Counterintuitively, the market that Network Next is making shifts us closer to actual net neutrality than relying on the benevolence of FCC or Comcast by putting power into the hands of content providers where previously they were at the mercy of last-mile ISPs.
If businesses can collectively bid for performance from private networks to offset actions by last-mile ISPs to cut costs and reduce service, then they also have the power to organize a general strike by refusing to bid for performance from traffic from specific ISPs until they work out better rates throughout the value chain.
So I see this as a mechanism to restore the spirit of net neutrality despite the best wishes of the FCC.
Bandwidth has followed an improvement curve since the early days of the internet, though it has slowed down a lot lately.
Streaming movies became viable despite the slowing rate of bandwidth improvement. We had 30 kbps modems in the 90s, then 8 Mbit ADSL1 in 1998, 24 Mbit ADSL2+ in 2003, and 40 Mbit DOCSIS around the same time. If speeds had continued on the same exponential curve, consumers would have 100+ gig broadband by now.
The same is true of datacenters: the bandwidth per unit of compute available to servers has been decreasing for a while.
Bandwidth is an easier problem than latency, because it's not limited by the speed of light.
Great reading this. I just want to drop a line to say that I have been working to solve precisely this problem -- for gaming but also other applications -- at the root. I have been working on a paper on this; here is the pre-proposal I published a while ago: https://docs.google.com/document/d/1xWaSB-3VSOMaiyxfjDqTalDC...
-- a lot of progress has been made since and I am looking for funding/grant before moving to publishing the actual paper and moving to implementation.
The root of the issue is that the IP protocol is incomplete, in that it relies on private agreements and ISP servicing that cannot react to real-time, real-world demand. IPv4 had some attempts at public QoS features that were mostly dropped in IPv6 because people realized they didn't work in practice.
The only way to fix this for real is to merge ISP servicing and peering into a per-packet peer-to-peer network market. This solves a great many issues at the same time and essentially achieves the promise of mesh networking and more.
If you would like to help, please drop me a line: [email protected] (sorry no website at the moment).
This was an awesome write up. The most interesting thing for me was learning that bottlenecks are not at the edge, but across the backbone, because Tier 1 carriers are optimizing for low cost, not low latency.
The thing I'm still a bit confused about is: is this a limitation of how their networks fundamentally work, or an artefact of my product choice when I buy a relatively cheap home Internet service? i.e. if I want a specialist low-latency, low-jitter link, could I not just pay for those benefits by purchasing a business-grade service? Or is there no distinction once the data gets onto a peered network?
Or is the idea here that I don't have to cough up for all the benefits of that superior product when I only want some of them and only some of the time?
And this whole thing does rely on the bottleneck not being at the edge, which I don't see any proof of or reason for. EDIT: is there any proof/data to support this?
Modern games have a problem that I believe is a direct consequence of consoles: the lack of a server browser.
Just let me choose a server to which I have a good connection and I will be much happier. Instead of dropping me in games with random people all the time with random connection issues.
One of the main reasons I don't play fps on the consoles is missing this. (And there are social advantages of playing with the same 'random' people as well).
Having a server browser does not solve all issues mentioned in his presentation, but it does at least inform the player of the connection before they join a game.
Random players can work with proper servers too, and many games just do that.
The problem with server browsers, or user-hosted servers, is/was the arbitrary rules enforced by the admin of each server. It's what killed Black Ops 1 on the PC.
This is epic. Great to see that someone who's been such a good support to the gaming community with his excellent learning resources is also being successful in his business.
It's also interesting to see these "private internet" endeavours pop up to solve centralization issues on the internet. Streaming sites like Youtube and Netflix have solved it in a similar way, striking deals with ISPs, as have the big CDNs, simply putting their machines in the ISPs' datacenters.
Network Next is solving this for the gaming domain, but there is a more general preferential-routing problem on the internet that this idea could be expanded to.
As someone who works in the games space, I find this to be an interesting proposition. My primary concern revolves around similar 'net neutrality' aspects: more and more of the internet is becoming pay-to-play, and this further entrenches that trend if, as a studio, you want the best networking for your games.
That's a good observation. Given the alternative between bidding for performance from Network Next vs paying some flat fee to Comcast, et al. directly, I'd go with NN and let them deal with Comcast.
... and then maybe add "faster gameplay" power ups to our loot boxes (kidding).
Games need some low latency data, but not much. Player actions, character movement, and maybe bullets need low latency. Asset downloading does not. If you had a dial up modem running over a non-packet phone network, and didn't overload the buffering, you'd have better latency than the Internet.
Many games make this distinction already. The latency-critical stuff goes over UDP, and it's limited to data where a missing packet is superseded by the next packet. Nothing is retransmitted. Bulk data goes over TCP, with reliable retransmission but more delay.
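The "missing packet is superseded by the next packet" pattern can be sketched in a few lines: unreliable state snapshots carry a sequence number, and the receiver simply drops anything older than what it already has, so nothing ever needs retransmission.

```python
# Sketch of sequence-numbered, never-retransmitted state snapshots:
# a newer snapshot makes any lost or late older one irrelevant.

class SnapshotReceiver:
    def __init__(self):
        self.latest_seq = -1
        self.state = None

    def on_packet(self, seq: int, state) -> bool:
        """Apply a snapshot if it's newer than what we have; return True if applied."""
        if seq <= self.latest_seq:
            return False  # stale or duplicate: ignore, never request a resend
        self.latest_seq = seq
        self.state = state
        return True

rx = SnapshotReceiver()
rx.on_packet(1, {"x": 0})
rx.on_packet(3, {"x": 5})            # packet 2 was lost; 3 supersedes it
applied = rx.on_packet(2, {"x": 2})  # late arrival, ignored
print(applied, rx.state)  # False {'x': 5}
```

Real implementations typically use small wrapping sequence numbers (e.g. 16-bit) with wrap-aware comparison; that detail is omitted here for clarity.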
It's too bad ISDN died off. 64 kb/s end to end, no packetization delay, and no jitter. The ideal gamer network would be an ISDN connection and an IP connection in parallel.
If we had QoS systems where 1% of your maximum data rate could be at high priority, this, and the VoIP problem, would be mostly solved.
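The "1% of your maximum data rate at high priority" idea maps naturally onto a token bucket: refill it at 1% of the link rate, and a packet only gets the priority queue if it fits the budget. A minimal sketch (the class and thresholds are invented for illustration, not any real QoS API):

```python
# Sketch of a priority budget: a token bucket refilled at 1% of the link
# rate decides whether a packet may use the high-priority queue or must
# fall back to best-effort.

class PriorityBudget:
    def __init__(self, link_bps: float, share: float = 0.01):
        self.rate = link_bps * share  # priority budget in bits/sec
        self.tokens = self.rate       # allow a burst of up to one second's budget
        self.last = 0.0

    def allow(self, now: float, packet_bits: int) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True   # send via the high-priority queue
        return False      # over budget: best-effort queue

budget = PriorityBudget(link_bps=100_000_000)  # 100 Mbit link -> 1 Mbit/s priority
print(budget.allow(0.0, 4000))        # small game packet: fits the budget
print(budget.allow(0.0, 10_000_000))  # bulk transfer: rejected
```

Because game traffic is tiny relative to the link, the budget is almost never exhausted by legitimate use, while bulk transfers can't abuse the priority class.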
Glenn, if you're still hanging around -- can the shader be written to account for the flow of P2P network traffic? Let's say I'm using an old-school configuration of a centralized lobby server that simply connects players to each other and the players connect directly over UDP?
Clever concept. I wonder if they'll publish any more information on how their service works. Things like what this 'route shader' is that people can read without having to go through Email-Our-Sales-Team shit.
It's a custom bidding script that tells their real-time auction platform how much you want to pay for route improvements for a particular user session.
Typically, shaders are used to move vertices in 3D meshes or color pixels on the GPU. Network Next is just using the term in a creative way to help connect with their audience; it's nothing more than a bidding model.
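As a bidding script, a "route shader" might look something like the following. To be clear, the function name, signature, and thresholds here are all invented for illustration; this is not Network Next's actual API:

```python
# Hypothetical route shader: given the public-internet route and the best
# candidate improved route for one player session, decide how much this
# session is worth bidding per GB of traffic.

def route_shader(direct_rtt_ms: float, next_rtt_ms: float,
                 max_bid_cents_per_gb: float = 10.0) -> float:
    """Return a bid (cents/GB) for routing this session off the public internet."""
    improvement = direct_rtt_ms - next_rtt_ms
    if improvement < 10.0:
        return 0.0  # not worth paying for a marginal gain
    # Bid proportionally to the latency saved, capped at our maximum.
    return min(max_bid_cents_per_gb, improvement * 0.2)

print(route_shader(80.0, 45.0))  # 35 ms better -> bids ~7 cents/GB
print(route_shader(50.0, 48.0))  # only 2 ms better -> 0.0
```

The interesting design point is that the buyer's policy runs per session, so a studio could bid aggressively only for ranked matches, or only for players whose direct route is unusually bad.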
This reads like a traffic targeting system ala FB's cartographer (c10r) bolted onto an ad exchange (continuous bidding etc.)
Interesting to see if they can keep the 10-second update intervals while scaling to more users.
Curious to see if the market actually wants to pay for marginal improvements in gaming latency. I would have expected something like this targeting VoIP to be more lucrative.
This sort of stuff has been around for a while, and it helps some, but realistically your game needs to be designed from the ground up with latency and stutter in mind.
If you do this, you won't need the complexity and expense of this sort of thing.
First you need client side prediction and smoothing. Next, you must design your game so it follows certain very strict rules about player interaction. These rules allow the players to feel they have real-time synchronization when in fact synchronization is lazy.
The specific rules required are different for each game, and must be nailed down before game design begins.
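The client-side prediction and reconciliation loop described above can be sketched in miniature: the client applies its own inputs immediately, remembers them, and when an authoritative server state arrives it rewinds to that state and replays the inputs the server hasn't yet processed. (The movement model here is deliberately trivial.)

```python
# Minimal sketch of client-side prediction with server reconciliation.

def apply_input(pos: float, move: float) -> float:
    return pos + move  # trivial movement model for illustration

class PredictedClient:
    def __init__(self):
        self.pos = 0.0
        self.pending = []  # (input_seq, move) not yet confirmed by the server

    def local_input(self, seq: int, move: float):
        self.pending.append((seq, move))
        self.pos = apply_input(self.pos, move)  # predict instantly, no waiting

    def server_update(self, ack_seq: int, server_pos: float):
        # Drop inputs the server has processed, snap to its authoritative
        # position, then re-simulate the still-unacknowledged inputs.
        self.pending = [(s, m) for (s, m) in self.pending if s > ack_seq]
        self.pos = server_pos
        for _, move in self.pending:
            self.pos = apply_input(self.pos, move)

c = PredictedClient()
c.local_input(1, 1.0)
c.local_input(2, 1.0)
c.local_input(3, 1.0)
# Server confirms input 1 but corrects the position (e.g. a collision).
c.server_update(ack_seq=1, server_pos=0.5)
print(c.pos)  # 0.5 + 1.0 + 1.0 = 2.5
```

When the server's correction differs from the prediction, the replayed result jumps, which is exactly the mispredict artifact discussed in the replies below: smoothing hides small corrections, but it can't hide large ones.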
Are you teaching game networking to Glenn Fiedler? You probably learnt it from him, or someone who read his articles!
Now seriously: client side prediction is not magic and it always introduces its own problems, like getting killed after getting into cover (especially in low-TTK shooters) or the opposite: bullets not registering even though they hit on your screen.
Client side prediction is sleight of hand. It's no substitute for actually perfect network conditions, and the game will still feel better the lower your latency is. It's necessary, but not sufficient.
This could be useful for long-distance remote desktop and SSH too. Curious how to build on top of it for diverse apps though (use it as a VPN somehow?)
Right now we are an SDK that embeds in the client, server and backend for applications. It is open source too (BSD license), but not yet publicly released. Will be released soon, I look forward to seeing what people do with it, outside of games.
This is true, but without a marketplace and somebody paying, how can we tell whether the QoS requests are legitimate? I mean, "yeah, my traffic is always real-time priority" ... people will just cheat it. So Network Next is a way of doing this while saying: OK, for the truly real-time traffic, applications are willing to sponsor it. Since it's paid, it won't be exploited.
https://steamcommunity.com/groups/steamworks#announcements/d...
They started using it in some of their games a couple of years ago (Counterstrike Global Offensive and Dota2 if I recall correctly).
One thing that's interesting is how different these services need to be in different countries.
https://technology.riotgames.com/news/fixing-internet-real-t...