From another Ensemble Studios interview with Gamasutra (emphasis mine):
"6. We did not plan for a patch. The version 1.0a patch, even though it was a success, was problematic in that as a company we had not planned for it. THE GENERAL ARGUMENT IS THAT IF YOU KNOW YOU ARE GOING TO NEED TO RELEASE A PATCH, THEN YOU SHOULDN'T BE SHIPPING THE GAME IN THE FIRST PLACE."
(Author of the original article here) It was really pretty naive - nobody would plan for an RTS now without planning a series of patches to do the inevitable adjustments. Even for AOK we planned to patch and adjust. The 'general argument' was a MS publisher stance as quoted by Matt Pritchard - to do a patch in those days through the MS system meant a lengthy and expensive full test process, rollout, creation of a patch system, etc - so it was something you planned and budgeted for. The concept of 'day-one-patch' would have been pretty horrific. Patching is now something you integrate, plan for and expect - because we aren't shipping on gold masters to a printing company.
Which is a ridiculous statement - I remember the number of 90's games that were utterly unplayable: they'd crash constantly or do other ridiculous things (the remake of Temple Of Elemental Evil would actually delete your C: drive on uninstall). I don't feel things have gotten worse as far as the need for day 1 patches; developers have just gotten more pragmatic about deployment.
And in 2018, with Internet connections a bit better than a 28.8k modem and hardware rather more powerful than a Pentium 90, the HD remake of AoE2 fails to deliver reliable multiplayer gameplay, with lag of multiple seconds and at least as many out-of-sync bugs as the original game had 20 years ago. The title should be closer to "150 archers" than "1500" for the HD edition.
This is one of the many reasons why I refuse to buy AoE2. The "new" version does not offer multi-platform support, it does not offer stable multiplayer, it requires half a gigabyte of RAM instead of the previous 32MB, and it still costs as much as I pay these days for some smaller but brand new games (not one they've profited from for almost twenty years without any maintenance).
It turns out the game is synchronized by each player having the commands for each 200ms "turn" a couple of turns in advance, and then playing the actions so that the same thing happens on all players' machines. That includes sending random seeds around. And then there's a load of provisions for lost packets, slow machines, and so forth.
Is this how most games do it? I would think something like WoW couldn't do this, and indeed sometimes I'd see glitches where a character would blink (like the spell) to somewhere new.
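The lockstep scheme described above can be sketched in a few lines. This is a toy model, not AoE's actual code: all names are hypothetical, and the "commands" are just numbers.

```python
import random

TURN_DELAY = 2  # commands execute two 200ms turns after they are issued

class LockstepSim:
    """One player's copy of the deterministic simulation (hypothetical)."""
    def __init__(self, seed):
        self.rng = random.Random(seed)  # same seed on every machine
        self.state = 0
        self.pending = {}               # turn number -> list of commands

    def schedule(self, turn, commands):
        # Commands are sent over the network tagged for a future turn,
        # so every machine has them in hand before it needs them.
        self.pending[turn + TURN_DELAY] = commands

    def run_turn(self, turn):
        # Every client applies the same commands in the same order and
        # draws from the same synchronized RNG stream.
        for cmd in self.pending.get(turn, []):
            self.state += cmd + self.rng.randint(0, 9)

# Two "machines" fed identical commands stay in sync without ever
# exchanging game state - only the tiny command messages travel.
a, b = LockstepSim(seed=42), LockstepSim(seed=42)
for t in range(10):
    cmds = [t % 3]                      # whatever the player clicked
    a.schedule(t, cmds); a.run_turn(t)
    b.schedule(t, cmds); b.run_turn(t)
print(a.state == b.state)               # True: simulations agree
```

The bandwidth win is exactly what the article describes: only the commands cross the wire, never the thousands of unit positions.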
Welcome to the world of game network programming :) it's an exciting place!
Yes, you have to be more-or-less real time, so you must compensate for latency, unreliable/slow connections, jitter, etc. If you're used to the web and the request/response model, you have to throw all of that out the window. The 200ms delay "hack" is pretty much standard practice; the window differs from game to game (smaller in FPSes), but it's usually there.
Most games use UDP, since transmission of any single packet doesn't have to be reliable, and if some packets are lost, it's cheaper to re-calculate the state diff and resend one slightly larger packet than two (or more) standard-size packets. Sometimes this can result in a "blink".
With sending around seeds and other "secret" data, you have to make a trade-off, since sending too much allows for cheating (map hacks, wall hacks, etc), but sending too little will create unpleasant surprises (enemy "teleporting" from around the corner).
Also often it's cheaper to run most of the calculations on the client (even including the critical stuff like hit tests, damage calculation, etc), and only occasionally verify the results on the server - especially in MMO's. Clients that are found suspicious get verified more often, and eventually get penalized / kicked.
Source: never actually wrote a networked game, but love reading about this stuff.
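The spot-verification idea in the parent comment might look something like this toy sketch; the names, the damage rule, and the escalation policy are all made up for illustration.

```python
import random

def damage(attack, armor):
    # Deterministic combat rule that both client and server know.
    return max(1, attack - armor)

class SpotVerifier:
    """Server side: re-check only a sample of client-reported results."""
    def __init__(self, base_rate=0.1):
        self.rate = base_rate   # fraction of reports actually verified
        self.strikes = 0
        self.rng = random.Random(0)

    def report(self, attack, armor, client_result):
        if self.rng.random() < self.rate:               # spot check
            if damage(attack, armor) != client_result:
                self.strikes += 1
                self.rate = min(1.0, self.rate * 2)     # watch suspects closely
        return self.strikes < 3                         # False means: kick

server = SpotVerifier()
honest = all(server.report(a, 2, damage(a, 2)) for a in range(5, 50))
print(honest)  # True: an honest client is never penalized
```

A cheater reporting inflated numbers only gets caught when sampled, but each catch doubles the sampling rate, so repeat offenders get verified more and more often until they're kicked.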
No, most games don't do it this way now. When I created this type of networking in 1994 it was to solve the particular problem of lots of units and low bandwidth. Now that bandwidth is much less of a consideration, games typically use an authoritative server (even if that server is a 'headless' process that runs on the same machine as a client). All clients send turns to the server and it sends out the authoritative results to all the clients. Unreal and Unity both have some documentation, I believe, on how their networking works at a high level - they are really adequate for most cases.
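A minimal sketch of that authoritative-server pattern, with all names hypothetical: clients submit inputs only, and the server is free to reject implausible ones before broadcasting results.

```python
class AuthoritativeServer:
    """Server owns the state; clients only submit inputs."""
    def __init__(self):
        self.positions = {}            # player id -> x coordinate

    def join(self, pid):
        self.positions[pid] = 0

    def handle_input(self, pid, dx):
        # The server validates and applies the move itself; a client
        # claiming a different position is simply ignored.
        if abs(dx) <= 1:               # basic sanity/cheat check
            self.positions[pid] += dx

    def snapshot(self):
        # The authoritative result broadcast to every client.
        return dict(self.positions)

server = AuthoritativeServer()
server.join("alice"); server.join("bob")
server.handle_input("alice", 1)
server.handle_input("bob", 50)        # rejected: moving too fast
print(server.snapshot())              # {'alice': 1, 'bob': 0}
```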
It depends on the genre and the specific game. RTSs can do it this way and some fighting games do, although some fighting games use a variation that involves predicting player input and rewinding the game state on misprediction in order to reduce perceived latency. FPSs tend to use a totally different model that involves sending only the most recent relevant information to each player and trying to balance server authoritative cheat-prevention with allowing clients a little leeway in order to make the twitchy gameplay feel better. There's a great paper on Tribes' networking model that is still largely relevant for FPSs.
I believe WoW (and most MMOs) instead take input from each player and run the simulation completely on the server. Although often enemy and NPC positions aren't synchronized exactly between players. There's also much more tolerance for latency in crowded situations - it doesn't really matter if the ~30 other player characters in a town lag behind their actual positions by even 5 or 10 seconds; you only need players in your party, or who are fighting against you, to be low latency.
My understanding was that WoW would do essentially what's described here for the player's character and entities in the immediate area (simulate them for the next few frames), but the server also maintained a view of the world. The client would occasionally sync with the server, and that's when you would see the blinks.
If you're curious, this[0] is an actively maintained implementation of a WoW server.
I worked on a big commercial MMO engine for about a decade.
In our implementation, the player runs ahead on the client (client autonomous) but is server verified (actions replayed on the server). The new authoritative server position for the player is sent back to the client, and the client replays whatever movements the player has made since the move that response acknowledges, resynchronizing to the point in time where you moved against server objects and playing forward from there transparently; the client (and server) maintain a short queue of movement history for each moving entity. Thus, if you ran into something on the server but it didn't obstruct your movement much, you would tend to blip much less. The physics framerate was very low compared to the graphics framerate, and updates received on the client would degrade with distance, as throttled by the server based on area-of-interest management. Position updates represented most of the bandwidth of the game. Everything is UDP based, with different forms of reliability options on top.
NPCs were "server authoritative" and their actions are replayed on the client. Interpenetrations are resolved via rigid body physics resolution on the client if something blips, but the server is ultimately the source of truth (nothing can interpenetrate on the server), so if a rigid body resolution on the client doesn't resolve some condition, the eventual resynchronization of player position from the server would make it happen at some point.
It worked out pretty well most of the time; certainly you can construct many scenarios where it goes awfully bad from the perspective of a client (on the server everything is always fine), but we preferred the illusion of immediate feedback/low latency versus this queuing up everything to take place N milliseconds in the future, and we didn't need exact reproducibility between clients, just eventual (and hopefully pretty quick) consistency.
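The client-autonomous, server-verified replay described above can be reduced to a toy sketch; the names and the movement rule are made up, and a real engine would resynchronize against obstacles and other entities.

```python
def simulate(pos, move):
    # Shared movement rule; the real server also clamps against obstacles.
    return pos + move

class PredictingClient:
    """Client runs ahead; server acks are applied and history replayed."""
    def __init__(self):
        self.pos = 0
        self.history = []   # (sequence number, move) not yet acknowledged
        self.seq = 0

    def move(self, dx):
        self.seq += 1
        self.history.append((self.seq, dx))
        self.pos = simulate(self.pos, dx)   # immediate local feedback

    def server_ack(self, acked_seq, server_pos):
        # Resynchronize to the authoritative position as of acked_seq,
        # then transparently replay every move made since then.
        self.pos = server_pos
        self.history = [(s, m) for s, m in self.history if s > acked_seq]
        for _, m in self.history:
            self.pos = simulate(self.pos, m)

c = PredictingClient()
for dx in (1, 1, 1, 1):
    c.move(dx)
# The server confirms the first two moves landed where predicted;
# the two unacknowledged moves are replayed on top: no visible blip.
c.server_ack(acked_seq=2, server_pos=2)
print(c.pos)  # 4
```

Had the server disagreed (say `server_pos=1` because a wall blocked you), the replay would land on 3 instead, and that correction is exactly the "blip" players sometimes see.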
> Part of the difficulty was conceptual -- programmers were not used to having to write code that used the same number of calls to random within the simulation
Can anyone grok this? I can't see why this (each player's simulation making the same number of calls to random) would ever not be the case if all players are running the same patch of the game and are executing the same commands.
(Original author here) Here was a really hard-to-find bug: in some instances more than one quantity of fish could be placed in the same location - that meant the game would work fine until someone fished that same fish the second time, and the fishing boats would diverge in the different simulations. The world sync check only counted the tile contents, so we didn't see it. There were actually two RNGs in the game: one was synchronized with the same start seed on all machines (basically the same random pool) for combat and whatnot; the other was unsynchronized and used for animation variance, etc. - things that weren't gameplay related. Not knowing when to use one of these specifically (e.g. animal facing seems like animation, but it definitely affected gameplay if the animals could be hunted) could alter the code path and cause an out-of-sync condition.
What they mean is that if you're doing anything that samples random, don't do it inside of something like a loop whose length depends on local game state that isn't synced on the current turn - especially in the graphics engine, for doing particle effects or picking random animation frames. For example, if you sample random every animation update frame to pick which fire graphic to use on a torch or bonfire (very common), but you suffer graphics slowdown and sample it too few or too many times, that count could vary between clients unintentionally.
Once it comes time to simulate the next turn, if you have something different than other clients because of a missed update or graphics lag, then even if object positions and the random seed are going to be "fixed" by another turn update, all future interactions with any objects that over- or under-sampled random will be wrong and can create further sync problems.
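The two-RNG split the original author describes can be shown concretely. In this hypothetical sketch, the seeded "gameplay" stream is consumed identically on every machine, while a local "visual" stream absorbs the frame-rate-dependent draws.

```python
import random

def run_client(seed, visual_frames):
    gameplay = random.Random(seed)  # synchronized: same seed on every machine
    visual = random.Random()        # unsynchronized: purely local
    checksum = 0
    for _ in range(100):
        checksum += gameplay.randint(0, 1000)   # combat rolls and the like
    for _ in range(visual_frames):
        visual.random()             # frame-rate dependent, but harmless:
                                    # it never touches the gameplay stream
    return checksum

def buggy_client(seed, visual_frames):
    rng = random.Random(seed)       # one shared stream for everything
    for _ in range(visual_frames):
        rng.random()                # frame-rate dependent draws shift...
    return sum(rng.randint(0, 1000) for _ in range(100))  # ...the combat rolls

# One machine rendered 30 animation frames, the other 90: the split
# design still agrees; the shared-stream version goes out of sync.
print(run_client(7, 30) == run_client(7, 90))      # True
print(buggy_client(7, 30) == buggy_client(7, 90))  # almost certainly False
```

This is exactly the torch/bonfire failure mode: the draw count, not the values, is what must match across machines for anything gameplay-related.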
For anyone complaining about AoE2 lag: it is mainly just the version on Steam.
If you apply the compatibility patch, you can load your Steam copy onto Voobly for free and play with next to no lag. The difference is night and day, plus Voobly has better features.
>For RTS games, 250 milliseconds of command latency was not even noticed -- between 250 and 500 msec was very playable, and beyond 500 it started to be noticeable.
I wonder if the rise of competitive RTS has changed this guideline. In SC2 people will start to complain if their ping to the server is more than about 130ms, and anything above 150ms starts to become painfully noticeable.
In Brood War, being able to play on "LAN latency" was always a huge deal - to the point that unauthorized third-party ladder systems enabled players to play at "LAN latency" even when official Battle.net didn't support it.
Human cognitive reaction times are a lot more latent than connections are now. The quality of play perceived by the players in the studies I did back in the day was much more about consistency of responsiveness than the direct number of milliseconds. The best possible human reaction time for cognitive tasks is still around 250msec for most players - it would be great to see updated information for tournament players (who are a different class of player) on what their actual perception-to-action time is in their favorite games.

AOK and later code used an adaptive scaling system to go faster when the network would reliably move packets more quickly - so it would auto-adjust to 'LAN speed' (actually a combination of the best render speed of the slowest PC plus an estimate of the round-trip latency).

Also - the command confirmation is not waiting for round-trip latency. You get confirmation when the command goes into the local buffer - 'command accepted' - and once that happens it is going to execute, so you get the confirm bark sound from the unit or building queue, or the movement arrow triggers. The games actually execute the commands simultaneously when all machines run the turn.
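That adaptive scaling might be sketched roughly like this; the constants and the clamping policy here are purely illustrative, not the actual AOK formula.

```python
def turn_length_ms(slowest_frame_ms, rtt_ms, floor=50, ceiling=1000):
    # Purely illustrative: stretch the communications turn to cover both
    # the slowest machine's render time and the measured round trip,
    # clamped so the game stays responsive but is never starved of commands.
    needed = max(slowest_frame_ms * 2, rtt_ms)
    return max(floor, min(ceiling, needed))

print(turn_length_ms(slowest_frame_ms=20, rtt_ms=30))   # 50  (LAN-ish)
print(turn_length_ms(slowest_frame_ms=50, rtt_ms=400))  # 400 (modem-ish)
```

The point of the clamp is the consistency Mark mentions: a steady 400ms turn feels better than one that oscillates between 50 and 800.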
It's not about any competitive RTS: StarCraft (1 and 2) have a game design and balance that emphasizes speed and reaction time. More "contemplative" RTSes with less or slower micromanagement can get away with higher latency.
The "command latency" you feel in RTS games is more closely related to RTT than ping, so those numbers are doubled already. Besides, people's expectations of performance and low latency have changed the last 20 years.
A command latency over 30ms hurts a lot. And I don't even play competitively. 250ms was not noticed? Absolute nonsense. A quarter of a second is gigantic when it comes to latency.
This game is amazing and has aged so well. The only downside is the network pauses and out-of-sync errors that inevitably happen 40 minutes into a two-player game.
I can't remember the last time I didn't set the population limit to 75 in an effort to alleviate network issues.
red_admiral | 7 years ago:
> "If you know you are going to need to release a patch, then you shouldn't be shipping the game in the first place."
Those were the days!
chaosbutters314 | 7 years ago:
If you own the HD version, there is a patch to freely convert it to Voobly.
[0] https://github.com/mangoszero/server
LoSboccacc | 7 years ago:
Many other games require higher update frequency, such as driving and shooting games and anything that has physics simulation.
They instead use prediction and reconciliation, and aim for the highest update frequency they can.
Counter-Strike has servers at 120Hz, IIRC.
Here are some pros and cons of the approach: http://www.gabrielgambetta.com/client-side-prediction-server...
edit: try the live demo too: http://www.gabrielgambetta.com/client-side-prediction-live-d...
dilap | 7 years ago:
http://schedule.gdconf.com/session/8-frames-in-16ms-rollback...
Similar approach, with some extra fanciness: you always see input at a fixed delay, and the game rolls back if necessary to handle mispredictions.
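That rollback idea can be sketched in a heavily simplified, hypothetical form; a real implementation snapshots state and rewinds only a few frames instead of resimulating from the start.

```python
def step(state, local, remote):
    # Deterministic per-frame rule shared by both players.
    return state + local + remote

class RollbackSession:
    """Fixed-delay input plus rewind-and-resimulate on misprediction."""
    def __init__(self):
        self.local = []    # local input for every frame so far
        self.remote = []   # remote inputs confirmed by the network

    def predict(self):
        # Simplest predictor: the remote player kept doing whatever
        # they did on their last confirmed frame.
        return self.remote[-1] if self.remote else 0

    def advance(self, local_input):
        self.local.append(local_input)

    def confirm_remote(self, remote_input):
        self.remote.append(remote_input)

    def state(self):
        # "Rollback" here is implicit: we resimulate from frame 0 using
        # confirmed remote inputs where we have them, predictions elsewhere.
        s = 0
        for f, l in enumerate(self.local):
            r = self.remote[f] if f < len(self.remote) else self.predict()
            s = step(s, l, r)
        return s

game = RollbackSession()
game.confirm_remote(1)     # frame 0's remote input arrives
game.advance(1)            # frame 0
game.advance(1)            # frame 1: remote input not here yet,
shown = game.state()       # so it is predicted as 1 -> state 4
game.confirm_remote(0)     # the real input was 0: misprediction!
fixed = game.state()       # resimulated truth -> state 3
print(shown, fixed)
```

The player briefly sees the predicted state, then the corrected one; with good prediction the correction is usually invisible.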
toast0 | 7 years ago:
If you're using a global random pool, you've desynced here unless all players have the same thing on screen.