
Making Fast-Paced Multiplayer Networked Games Is Hard

109 points | Impossible | 11 years ago | gamasutra.com

53 comments

[+] tinco|11 years ago|reply
The hardest part is that most game engines are not designed to be networked in this way. Try to find an open source physics engine that natively supports teleportation, easing, and prediction: they're not out there. UE4 is the first engine I've used that seems to have a very nice multiplayer API, and it's only been out for indie developers for a couple of months.

So the hard part is not devising the networking scheme, it's building a whole game engine (or thoroughly modding one) around it, at least in my experience.

I was working on a multiplayer racing game project (like GTA2), and my approach was to run 2 physics engines in parallel. One physics engine would always be authoritative and in sync with the server, but because of the lag it would always be a frame or two (or more) behind. The other would be working on predicting what is going to happen, every frame snapping back to the authority and then applying the (predicted and user) inputs over that.

The actual position the user would see would be an averaged position between the current predicted position, and its previous predicted position (to prevent too much jitter/snapback).
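For what it's worth, the snap-back-and-replay loop described above can be sketched roughly like this. All names and the toy 1D physics here are invented for illustration; this is a sketch of the scheme, not code from any real project:

```python
from dataclasses import dataclass

@dataclass
class State:
    x: float = 0.0
    vx: float = 0.0

def step(state: State, throttle: float, dt: float) -> State:
    """One physics tick: apply input as acceleration, integrate."""
    vx = state.vx + throttle * dt
    return State(x=state.x + vx * dt, vx=vx)

class PredictedClient:
    def __init__(self):
        self.authoritative = State()   # last state confirmed by the server
        self.pending = []              # (seq, input) not yet acked by the server
        self.prev_predicted = State()  # previous frame's prediction

    def on_server_update(self, state: State, last_acked_seq: int):
        self.authoritative = state
        # Drop inputs the server has already applied.
        self.pending = [(s, i) for (s, i) in self.pending if s > last_acked_seq]

    def frame(self, seq: int, throttle: float, dt: float) -> float:
        self.pending.append((seq, throttle))
        # Snap back to the authority, then replay all unacked inputs.
        predicted = self.authoritative
        for _, inp in self.pending:
            predicted = step(predicted, inp, dt)
        # Average with last frame's prediction to hide snapback jitter.
        render_x = (predicted.x + self.prev_predicted.x) / 2.0
        self.prev_predicted = predicted
        return render_x
```

The averaging at the end is the jitter-hiding trick from the comment above; a real engine would do this in 3D and per rigid body, of course.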

If you've got a better scheme to do networked physics I'm all ears :)

[+] jmorrison|11 years ago|reply
First time post from long-time lurker. Be kind.

Back In The Day (just to show how ancient I am), I personally debugged the first peer-to-peer (vs client/server) networked simulation protocol on the DARPA SIMNET project: http://en.wikipedia.org/wiki/SIMNET

Interesting stories notwithstanding (anybody else here ever get to drive/fire a tank, and have fellow engineers get dosed with CS gas, as part of a software engineering job?), when the time came to standardize this research protocol as http://en.wikipedia.org/wiki/Distributed_Interactive_Simulat..., we tried to get them to generalize the protocol thusly: http://www.google.com/patents/US5623642

The work was supported by a DARPA small-business project, so the IP was left with the company in hopes of commercializing it, and maximizing dissemination. The attempt to get the ideas incorporated as part of the standard was singularly unsuccessful. More's the pity, as I think it would have really helped make the simulations more capable of simulating a wider array of physical phenomena.

The commercial uptake was equally unsuccessful. I experienced some culture shock when proselytizing (again, unsuccessfully) at game development conferences.

[+] aaasux|11 years ago|reply
> UE4 is the first engine I've used that seems to have a very nice multiplayer API

I'm working on a triple-A UE4 title and I don't like UE4's replication system at all. It's a step towards making the networking concerns invisible, and that always seems like a mistake to me. If the protocol for keeping clients in sync were more explicit, it would be easier to tightly control what gets sent over the network, and when.

[+] archagon|11 years ago|reply
Interesting. After reading this Gamasutra article[1] on how online multiplayer can literally add years to your development time, I got a little scared. Will UE4 allow developers to avoid this setback?

(On the other hand, developers like Carmack and Michał Marcinkowski only took a matter of months to add it to their games, and they were among the first. So maybe it's not as big of a deal as it seems.)

[1]: http://www.gamasutra.com/view/feature/217434/the_ups_and_dow...

[+] TheLoneWolfling|11 years ago|reply
Not so much an alternative as an implementation, but this is where immutable structure-sharing data structures become very useful.
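To make the point concrete, here is a tiny illustrative sketch (not any real engine's API) of why structure sharing helps: keeping a snapshot of the world per tick, for rollback or replay, only costs the entities that actually changed.

```python
def update_entity(world: dict, entity_id: str, **changes) -> dict:
    """Return a new world dict; unchanged entities are shared, not copied."""
    new_entity = {**world[entity_id], **changes}
    return {**world, entity_id: new_entity}

tick0 = {
    "player1": {"x": 0.0, "y": 0.0},
    "player2": {"x": 5.0, "y": 5.0},
}
tick1 = update_entity(tick0, "player1", x=1.0)

# Rolling back is just keeping the old reference around.
assert tick0["player1"]["x"] == 0.0
assert tick1["player1"]["x"] == 1.0
# player2 was untouched, so both ticks share the very same object.
assert tick1["player2"] is tick0["player2"]
```

A production implementation would use a proper persistent data structure (hash array mapped tries and the like) rather than shallow dict copies, but the sharing principle is the same.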
[+] Cakez0r|11 years ago|reply
We had this problem too when making a racing game. We ended up making remote entities move independently of the physics engine (i.e. cars for remote players were just static objects that had their positions manually updated). This worked to some extent, but we had to do some fudging to make collisions work somewhat realistically.
[+] ggambetta|11 years ago|reply
Another series of articles I wrote ~4 years ago about the same topic, but perhaps more detailed and with a live demo: http://www.gabrielgambetta.com/fpm1.html

The title of this article sounds vaguely familiar ;)

[+] cinskiy|11 years ago|reply
Your articles were SUPER helpful when I added multiplayer to my game, you may be the reason I succeeded in it, alongside Valve documentation. Thank you so much for writing them!
[+] teddyh|11 years ago|reply
See also:

Distributed Virtual Reality – An Overview, Bernie Roehl, 1995: https://ece.uwaterloo.ca/~broehl/distrib.html

Characteristics of UDP Packet Loss: Effect of TCP Traffic, Hidenari Sawashima, Yoshiaki Hori, Hideki Sunahara, Yuji Oie, 1997: http://www.isoc.org/inet97/proceedings/F3/F3_1.HTM

I Shot You First: Networking the Gameplay of HALO: REACH, David Aldridge, 2011: http://www.gdcvault.com/play/1014345/I-Shot-You-First-Networ...

[+] rlefebvre|11 years ago|reply
This last link is a gem. They refer to the TRIBES Networking Model in the talk, explained here also: http://gamedevs.org/uploads/tribes-networking-model.pdf

There is a beautiful implementation of this in the old OpenTNL that seems to be still alive in the newer Torque engine: https://github.com/GarageGames/Torque2D/tree/master/engine/s...

The original article does not discuss the aspect of scaling fast-paced games to handle large numbers of replicated objects and players under constrained network conditions but I found the approach proposed in the original TRIBES model to be the most (only?) credible so far. I don't see support for this in any of the popular, modern network libraries (RakNet, enet, lidgren, ...). They all seem to have taken the 'multiple reliable channels' direction but that just doesn't seem to scale to many connected players the way the TRIBES model does.

I would love to hear from anyone who has had experience with that!

[+] hellbanner|11 years ago|reply
In specific contexts, like a 1v1 fighting game, you can get clever; see GGPO*. Every move in the game has a specified "startup" time, which is generally (a) too fast to react to and (b) consistent.

When you jump & throw a punch you send the frame along with your attack. My client speeds up your character's game state to match what your client experiences.

http://www.gamasutra.com/view/news/177508/The_lagfighting_te...
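The rollback idea GGPO popularized can be sketched like this (a toy, illustrative model, not GGPO's actual API): every input is stamped with a frame number, the remote player's input is predicted, and when a real remote input arrives for a past frame and the prediction was wrong, the game rolls back to that frame and re-simulates to the present.

```python
def simulate(state: int, inputs: dict, frame: int) -> int:
    """Toy deterministic step: add both players' inputs for this frame."""
    return state + inputs.get(("p1", frame), 0) + inputs.get(("p2", frame), 0)

class RollbackSession:
    def __init__(self):
        self.inputs = {}         # (player, frame) -> input value
        self.snapshots = {0: 0}  # frame -> saved state
        self.frame = 0

    def local_input(self, value: int):
        self.inputs[("p1", self.frame)] = value
        # Predict the remote input by repeating their last known one.
        self.inputs.setdefault(("p2", self.frame),
                               self.inputs.get(("p2", self.frame - 1), 0))
        new_state = simulate(self.snapshots[self.frame], self.inputs, self.frame)
        self.frame += 1
        self.snapshots[self.frame] = new_state

    def remote_input(self, frame: int, value: int):
        if self.inputs.get(("p2", frame)) == value:
            return  # prediction was right, nothing to do
        self.inputs[("p2", frame)] = value
        # Mispredicted: roll back and re-simulate up to the current frame.
        for f in range(frame, self.frame):
            self.snapshots[f + 1] = simulate(self.snapshots[f], self.inputs, f)
```

The reason this works so well for fighting games is exactly the point above: move startup times are too fast to react to, so repeating the opponent's last input is usually a correct prediction and rollbacks are rare and short.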

[+] fredley|11 years ago|reply
Another reason [TagPro](http://tagpro.koalabeast.com) is an excellent game. Its implementation of this stuff is great: with a reasonable connection my ping is usually under 10ms, and the game is often won and lost by the smallest of margins, making this kind of thing very important.

Not to mention the excellent gameplay and mechanics - it's very simple to understand and learn, but very very difficult to play well!

[+] Luc|11 years ago|reply
Not saying this is the case, but 10 ms is a very low ping that could hide a lot of issues with the implementation.
[+] no_future|11 years ago|reply
Fascinating. I've been interested for a while now in what kind of server-side software real-time online games (WoW, Call of Duty, etc.) run, but haven't been able to find much info on it.

Seems that they would need to be optimized for many very high stress concurrent connections with as little latency as possible, so I'd guess that they run C/C++ and/or Java? Do they use something like Websockets, or do UDP/TCP or some other persistent two way connection method? There don't seem to be any publicly available libraries focused on this kind of thing, so I assume that they develop their networking stuff mostly in-house.

Anyone that knows about this stuff willing to share?

[+] pjc50|11 years ago|reply
Generally it's UDP. Nobody would use websockets unless they had to run in a browser or similar VM.

C++ would be the default choice, but Eve Online uses Stackless Python which seems to have worked well for them.

The Quake3 server source (pure C) is available for your inspection, and there's a good analysis of how it works here: http://fabiensanglard.net/quake3/network.php
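For anyone curious what "just UDP" looks like at the socket level, here's a minimal sketch using only the standard library (the message format and the ack are made up for illustration; real games layer sequence numbers and their own reliability scheme on top):

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))   # OS picks a free port
server.settimeout(5.0)
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5.0)
# No connection handshake: each datagram stands alone and may be
# lost, duplicated, or reordered -- the game protocol must cope.
client.sendto(b"pos 10.5 3.2 seq 42", addr)

data, client_addr = server.recvfrom(1024)
server.sendto(b"ack seq 42", client_addr)
ack, _ = client.recvfrom(1024)

client.close()
server.close()
```

Compare this with TCP, where the kernel would hold back every later packet until a lost one is retransmitted; for position updates you would rather just take the newest datagram and ignore the stale one.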

[+] Igglyboo|11 years ago|reply
Unless it's in a browser, it's not using Websockets. TCP where the overhead is acceptable or packet loss is not: turn-based strategy, slower-paced role-playing games. UDP where minimal lag is needed: FPSes, mainly.
[+] Cakez0r|11 years ago|reply
WoW uses TCP the last time I checked (the overhead of TCP is acceptable for an RPG in most cases). I believe Diablo 3 also uses TCP and uses protobuf.

EVE online have some very good tech posts and are generally quite open about their technology (although it's mostly about hardware and infrastructure). Valve also have some good articles on networking for Source games (someone posted the link elsewhere in this thread).

[+] Dove|11 years ago|reply
Just a few months ago, I did an overhaul of the network code for DXX-Retro[1] -- a source port of Descent. Descent worked much better over the (laggy, lossy, bursty) net than DOOM, and -- if you're looking to mimic old school games -- is really worth studying.

Some quick technical commentary:

The bandwidth calculation in the article is predicated on sending updates at 60 hz -- or what we'd in the Descent community call 60 PPS. Probably because the screen is refreshing at that rate? It's unnecessary. You want a high framerate for control of your own game, but you don't need it to see enemies smoothly. Remember, movies only run at 24 FPS. ;)

The highest I allow in Descent is 30 PPS, and really . . . it's seen as a luxury. 20 is generally fine. Sometimes I play games at 10, and there you can definitely tell -- even with the smoothing (it sends velocity and acceleration, too, and interpolates) -- but it's perfectly playable.

Which is something worth remembering. With old school games, "crappy but perfectly playable" is actually all they were able to achieve.

No, the physics engines aren't perfectly locked, and how tolerable this is will depend on your game. In a simple FPS, this really isn't a big deal. You just lag lead (compensate in aim, both for the motion of your enemy, and the fact that the data is old). :)

Some of how he's proposing to send the data is wasteful. He initially proposes sending "is weapon equipped" and "is firing" as 1 byte a piece -- and later concludes he can get those in 1 bit a piece. True. That's a savings by a factor of 8 right there. But I can do you one better -- don't send it every update. How often do those states really change?
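The bit-packing half of this is simple enough to sketch (the flag names are invented for illustration): eight booleans fit in one byte instead of eight.

```python
def pack_flags(is_firing: bool, weapon_equipped: bool, jumping: bool) -> int:
    """Pack up to eight boolean flags into a single byte."""
    return (is_firing << 0) | (weapon_equipped << 1) | (jumping << 2)

def unpack_flags(byte: int):
    return bool(byte & 1), bool(byte & 2), bool(byte & 4)

b = pack_flags(True, False, True)
assert b == 0b101
assert unpack_flags(b) == (True, False, True)
```

The bigger win, as the comment says, is not sending the byte at all when nothing changed: send state deltas or events instead of re-sending rarely-changing flags every tick.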

In Descent, we have two classes of data: position and orientation data that's sent at a steady rate (10-30 PPS), and event data that's sent . . . whenever it happens. Equipping weapons is definitely the second type; it happens extremely rarely -- like, seconds pass between weapon switches. :)

One thing that may surprise you. We don't send "is firing" as a flag with every packet -- we send one packet per shot taken! Two reasons for this: one, it's actually lower bandwidth. Shots fire rarely -- our fastest gun fires 20 bullets per second, but the next fastest fires six. Then five, then four, then . . . one shot every two seconds. And you're not always firing, either. Sending one packet per shot saves bandwidth. But it also increases accuracy! We attach those shot packets to a position and orientation update, so -- even if the receiver has incorrectly interpolated your position -- the shot goes exactly where you intended it to. This is very important. :)
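The per-shot event packet could look something like this (the wire format, message type byte, and field layout here are entirely invented, just to illustrate the idea of attaching the shooter's exact position and direction to the shot):

```python
import struct

# One byte of message type, then position x/y/z and direction x/y/z
# as little-endian single-precision floats.
SHOT_FMT = "<Bffffff"
MSG_SHOT = 0x07  # hypothetical message type id

def encode_shot(pos, direction) -> bytes:
    return struct.pack(SHOT_FMT, MSG_SHOT, *pos, *direction)

def decode_shot(data: bytes):
    fields = struct.unpack(SHOT_FMT, data)
    return fields[1:4], fields[4:7]

pkt = encode_shot((1.0, 2.0, 3.0), (0.0, 0.0, 1.0))
pos, direction = decode_shot(pkt)
assert pos == (1.0, 2.0, 3.0)
assert direction == (0.0, 0.0, 1.0)
```

Because the shot carries its own origin and direction, the receiver spawns the bullet where the shooter actually was, not where the receiver's interpolation guessed, which is the accuracy win described above.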

As a player (and a programmer -- but mostly a player), I'm concerned about the article's conclusions about the value of prediction, and the recommendation to avoid position popping by applying smoothing -- to quote the article, "it ruins the virtual reality of the game".

Ok, yes, it does. But the thing is, you have a choice here. You can present your players with something pretty and smooth that is fundamentally a lie, or with something jittery that is the best knowledge you have about where the enemy is. This is a fundamental tradeoff: verisimilitude or accuracy. You can't have both.

My players overwhelmingly prefer accuracy. To them, the avatars on the screen are targeting aids, and they understand that the data is old and a bit lossy and bursty sometimes, and they want the best data possible so they can take the best shot possible. :)

I suppose your mileage may vary by audience. Mine's been playing this game 20 years and "crappy but playable" is both normal and good to them. :)

But -- I can't imagine this would be different in another FPS -- taking a shot that you know is good on your screen, and having it not hit the other guy . . . that's rage-inducing right there!

Yeah, networked games are hard. For sure. And there are fundamental hard tradeoffs involved in engineering them. For sure. But it's an interesting problem, and also worth it. :)

[1] https://github.com/CDarrow/DXX-Retro

[+] jl_2014|11 years ago|reply
I wrote the multiplayer code for Descent 2 and Descent 3 (as well as the graphics engine for D3). Although I can't remember all the details because D2 was back in 1995(!), D2 had a significantly overhauled network layer from D1. Some examples: short packets, where position and orientation data was quantized down into single bytes instead of floats, and lower packets per second (you could go down to 5 PPS if I recall correctly). We were also the first game where, if the 'master' dropped out of the game, another player would become the master in a hand-off scheme that was a bit complex. The master controlled things like notifying other players of new players, end of level stuff, etc. We had to sweat every byte because we were trying to have 8 players with low lag over a typical 28.8k modem.
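The single-byte quantization trick can be sketched like this (the value ranges and bucket math are illustrative, not Descent's actual encoding): a float known to lie in a fixed range maps onto one byte, trading precision for 4x less bandwidth per field.

```python
def quantize(value: float, lo: float, hi: float) -> int:
    """Map a float in [lo, hi) onto an unsigned byte 0..255."""
    t = (value - lo) / (hi - lo)
    return max(0, min(255, int(t * 256)))

def dequantize(byte: int, lo: float, hi: float) -> float:
    # Reconstruct at the center of the quantization bucket to
    # halve the worst-case error.
    return lo + (byte + 0.5) / 256 * (hi - lo)

b = quantize(12.3, -128.0, 128.0)
approx = dequantize(b, -128.0, 128.0)
assert abs(approx - 12.3) <= 0.5  # worst-case error is half a bucket
```

Orientation compresses even better, since each component of a unit vector is already bounded to [-1, 1).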

D3 changed the overall feel of the Descent series, mostly because we introduced a terrain engine and that had a cascading effect on the rest of the game. The speed of the ship, for example, had to be significantly increased because if we used the ship speed from D1/D2 then going out into the terrain felt like you were stuck in molasses.

Working on those games was incredibly fun. Ah, to be 25 again.

[+] jephir|11 years ago|reply
> As a player (and a programmer -- but mostly a player), I'm concerned about the article's conclusions about the value of prediction, and the recommendation to avoid position popping by applying smoothing -- to quote the article, "it ruins the virtual reality of the game".

I agree, predicting remote objects causes more problems than it solves. Any remote player input that affects the object causes noticeable position popping when the local client receives the update.

The Source engine has a better solution - instead of trying to predict remote objects, delay rendering by 100ms and interpolate the received updates. This makes the position updates appear smooth without any stuttering. However, now the client lags 100ms behind the server.

The server has to "undo" this lag when the player wants to execute a request (like shooting). It does this by rewinding the world state by 100ms + client ping when executing the request. This makes it so that a client's actions appear to execute without any lag on their view (i.e. they don't have to shoot ahead of their target).
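The render-in-the-past interpolation can be sketched like this (the 100 ms window matches the default described above; the snapshot format is invented for illustration):

```python
INTERP_DELAY = 0.100  # render this far in the past, in seconds

def interpolated_position(snapshots, now):
    """snapshots: list of (timestamp, x) pairs, oldest first."""
    render_time = now - INTERP_DELAY
    for (t0, x0), (t1, x1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1:
            alpha = (render_time - t0) / (t1 - t0)
            return x0 + alpha * (x1 - x0)
    return snapshots[-1][1]  # no bracketing pair: hold last known position

snaps = [(0.00, 0.0), (0.05, 5.0), (0.10, 10.0)]
# At wall-clock t=0.175 we render t=0.075, halfway between two updates.
assert abs(interpolated_position(snaps, 0.175) - 7.5) < 1e-9
```

Because the client always renders between two snapshots it has actually received, the motion is smooth with no prediction at all; the server-side rewind for hit detection is what pays for the delay this introduces.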

This causes some temporal anomalies, for example, players can shoot you around corners because they see your position 100ms in the past. However, most players seem to prefer this over having to constantly shoot ahead of their targets to compensate for prediction errors.

[+] Dove|11 years ago|reply
I forgot to mention -- about PPS -- most modern high-speed connections can handle a 30 PPS 8-player Descent game just fine, but there are a few players stuck in rural areas with very old connections, who can't. I'm guessing -- from the 8kB/s limit he set for himself -- that the author's audience sees a similar distribution.

The thing is, you don't have to make those connections symmetric. One of the features I'm working on for the next Descent Retro release is allowing players to set a PPS limit for themselves, based on their own available bandwidth, that will both reduce their PPS upstream, and request that their opponents reduce the rate of the incoming packets.

This means people with fast connections can have high-quality 30 PPS game interactions, and people with slow connections can still play with them. They have to put up with 8 PPS (or whatever), but it's preferable to not playing. :)

[+] Dove|11 years ago|reply
Another thought! From experience . . .

If you're worried about popping, make sure you drop out-of-order position packets. In my experience, the 33 ms of player input between normal packets is negligible. You get popping in theory, but you can't see it. I haven't seen any for a long time, and the smoothing Descent does is both predictive and minimal. But a position packet arriving 100 ms late . . . that's a pop. When I inherited this code, ships did a lot of jittering; when I eliminated out-of-order packets, it pretty much all went away.

YMMV, of course. Descent ships are slow by FPS standards.
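The out-of-order drop is a one-liner once positions carry sequence numbers; a minimal sketch (field names invented):

```python
class PositionReceiver:
    def __init__(self):
        self.latest_seq = -1
        self.position = None

    def on_packet(self, seq: int, position) -> bool:
        """Apply the packet only if it's newer than what we already have."""
        if seq <= self.latest_seq:
            return False  # stale or duplicate: discard, don't pop backwards
        self.latest_seq = seq
        self.position = position
        return True

rx = PositionReceiver()
assert rx.on_packet(1, (0.0, 0.0))
assert rx.on_packet(3, (2.0, 0.0))      # packet 2 was delayed...
assert not rx.on_packet(2, (1.0, 0.0))  # ...and gets dropped on arrival
assert rx.position == (2.0, 0.0)
```

A real protocol with a small sequence-number field would also need to handle wraparound when comparing sequence numbers.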

[+] Mithaldu|11 years ago|reply
> Remember, movies only run at 24 FPS.

Please do not say such a thing without also including the footnotes that movies get away with that by:

1. having no interaction, so the audience has no way to feel the delay between frames; as opposed to interactive media where the delay is easily felt through the latency of input to action on screen

2. having interpolation naturally built into the camera, i.e. motion blur being entirely free

[+] digikata|11 years ago|reply
If you're sending a packet per shot, what happens if a packet is dropped? Or are you using a TCP connection, or ack'ing every shot packet?
[+] notastartup|11 years ago|reply
This is so cool. I remember playing Descent on the PlayStation; it was the first game I bought. I didn't like it at first, because the guy who sold me the PlayStation said it was sort of like a flight simulator, but, to my great disappointment, you never get to fly outside the mines.

But I've put hours into the game, even though it scared the crap out of me (flashing and dimming lights, with an alarm telling you the mine is going to blow when you haven't even figured out where the exit is: claustrophobically genius).

To see so much work going into it in the community is awe inspiring.

[+] Animats|11 years ago|reply
This problem was first described in Farmer and Morningstar's "The Lessons of Lucasfilm's Habitat" (1990), which described an MMORPG running on Commodore 64 machines over 300 baud dial-up modems. They referred to this problem as "surreal time".

Back then, they had latencies of 100ms to 5000ms. The original article here says they can get latencies of 200ms from 99% of XBox connections. Not much progress in latency in 25 years. I can understand the LAN party nostalgia.

[+] toqueteos|11 years ago|reply
I've been using Bolt [1] for Unity3D for a fast-paced game, and it surprised me how far I got with almost no knowledge of the library. It's UDP all the way and super explicit about everything.

[1]: www.boltengine.com

[+] moomin|11 years ago|reply
As apparently, is being on the right side of the GamerGate fiasco. (They're down at the moment.)
[+] benihana|11 years ago|reply
I always found this article about how the Source Engine handles networking fascinating: https://developer.valvesoftware.com/wiki/Source_Multiplayer_...

Really shows how hard just a few of the challenges of making a multiplayer game are.

[+] squeaky-clean|11 years ago|reply
I also love the Source Engine networking. Another one I like is The Torque Game Engine. It has a very similar networking model, I think it may even be a little better (it's the engine that powered the original Tribes game).

I remember buying a license to the original TGE years ago and digging through the networking code, it was great. They've open-sourced the latest version of Torque under the MIT license, and I believe the networking code is nearly identical to the original code used for Tribes.