128-ticks-per-second servers. (And lo, suddenly the article's thesis is clear.)
A "tick", or an update, is a single step forward in the game's state. UPS (as I'll call it from here) or tick rate is the frequency of those. So, 128 ticks/s == 128 updates per sec.
That's a high number. For comparison, Factorio is 60 UPS, and Minecraft is 20 UPS.
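A tick rate like that is typically held with a fixed-timestep loop; here's a minimal sketch (the function names are illustrative, not from any particular engine):

```python
import time

TICK_RATE = 128            # updates per second
DT = 1.0 / TICK_RATE       # fixed timestep: ~7.8125 ms of budget per tick

def run_server(num_ticks, update):
    # The simulation always advances in DT-sized steps; we sleep off
    # whatever part of the budget the update didn't use.
    next_tick = time.perf_counter()
    for tick in range(num_ticks):
        update(tick, DT)
        next_tick += DT
        time.sleep(max(0.0, next_tick - time.perf_counter()))
```

If an update ever takes longer than DT, the server falls behind its tick rate, which is exactly why the per-tick CPU budget matters so much at 128 UPS.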
At first I imagined an FPS's state would be considerably smaller, which should support a higher tick rate. But I also forgot about fog of war & visibility (Factorio for example just trusts the clients), and needing to animate for hitbox detection. (Though I was curious if they're always animating players? I assume there'd be a big single rectangular bounding box or sphere, and only once a projectile is in that range, then animations occur. I assume they've thought of this & it just isn't in there. But then there was the note about not animating the "buy" portion, too…)
When Battlefield 4 launched it had terrible network performance. Battlefield 3 wasn't great, but somehow BF4 was way worse. It turned out that while clients sent updates to the server at 30 Hz, the server only sent updates back at 10 Hz[1].
This was the same as BF3, but there were also some issues with server load making things worse and high-ping compensation not working great.
After much pushback from players, including some great analysis by Battle(non)sense[2] that really got traction, the devs got the green light on improving the network code and worked a long time on that. In the end they got high-tickrate servers[3][4], up to 144Hz though I mostly played on 120Hz servers, along with a lot of other improvements.
The difference between a 120Hz server and a 30Hz one was night and day for anyone who could tell the difference between the mouse and the keyboard. Problem was that by then the game was half-dead... but it was great for the 15 of us or so still playing it at that time.
Also for comparison, the Runescapes (both RS3 and Old School RuneScape) run on a 0.6-second tick system (100 ticks per minute). It works rather well for these games, which I guess highlights that some games either a) can get away with high latencies depending on their gameplay mechanics, or b) will evolve gameplay mechanics based on the inherent limitations of their engines/these latencies. RS3 initially leaned into the 0.6s tick system (which is a remnant of its transitions from DeviousMUD to Runescape Classic to RS2) and eventually developed an ability-based combat system on top of what was previously a purely point-and-click combat system, whereas OSRS has evolved new mechanics that play into this 0.6s tick system and integrate seamlessly into the point-and-click combat system.
Having played both of these games for years (literally, years of logged-in in-game time), most FPS games with faster tick systems generally feel pretty fluid to me, to the point where I don't think I've ever noticed the tick system acting strange in an FPS beyond extreme network issues. The technical challenges that go into making this so are incredible, as outlined in TFA.
The client update rate was measured to be 73, not quite matching the 128 Hz server tick and update rate. Maybe it changed in the last 5 years. CS:GO private servers also ran at a 128 tick rate.
> I assume there'd be a big single rectangular bounding box or sphere, and only once a projectile is in that range, then animations occur.
Now that's a fun one to think about. Hitscan attacks are just vectors right? So would there be some perf benefit to doing that initial intersection check with a less-detailed hitbox, then running the higher res animated check if the initial one reports back as "Yeah, this one could potentially intersect"? Or is the check itself expensive enough that it's faster to just run it once at full resolution?
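The broad-phase/narrow-phase split being asked about is a common pattern; a hedged sketch (the data layout and the narrow_phase stub are made up for illustration):

```python
def ray_hits_sphere(origin, direction, center, radius):
    # Broad phase: cheap ray-vs-bounding-sphere test.
    # direction is assumed to be normalized.
    cx, cy, cz = (center[i] - origin[i] for i in range(3))
    t = cx * direction[0] + cy * direction[1] + cz * direction[2]
    closest = [origin[i] + t * direction[i] for i in range(3)]
    dist2 = sum((closest[i] - center[i]) ** 2 for i in range(3))
    return t >= 0 and dist2 <= radius * radius

def hitscan(origin, direction, players):
    hits = []
    for p in players:
        if ray_hits_sphere(origin, direction, p["bounds_center"], p["bounds_radius"]):
            # Narrow phase: only now pose the skeleton and test the
            # per-bone hitboxes (stubbed out here as a callable).
            if p["narrow_phase"](origin, direction):
                hits.append(p["name"])
    return hits
```

The broad phase rejects most players with a handful of multiplies, so the expensive animated-hitbox work only runs for the few candidates near the ray.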
I think for FPSes, the server relies on the client for many of the computationally intensive things, like fog of war, collision detection, line of sight and so on. This is why cheating like wall hacks are even possible in the first place: The client has the total game state locally, and knowing where to look for and extract this game state allows the cheater to know the location of every player on the map.
If the server did not reveal the location of other players to the client until the server determined that the client and opponents are within line of sight, then many types of cheating would basically be impossible.
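That kind of server-side visibility culling could look like this 2D sketch (walls as line segments; all names and the data layout are illustrative):

```python
def segments_intersect(p1, p2, p3, p4):
    # 2D orientation test: does segment p1-p2 cross segment p3-p4?
    def orient(a, b, c):
        v = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, p3) != orient(p1, p2, p4) and
            orient(p3, p4, p1) != orient(p3, p4, p2))

def replicate_to(viewer, players, walls):
    # An opponent is only included in this client's snapshot if no
    # wall blocks the line between them, so hidden enemies never
    # reach client memory in the first place.
    out = []
    for p in players:
        if p is viewer:
            continue
        blocked = any(segments_intersect(viewer["pos"], p["pos"], a, b)
                      for a, b in walls)
        if not blocked:
            out.append(p["name"])
    return out
```

Real games that do this (interest management) need fancier checks than a raw segment test, since peeking latency and sound cues complicate the "not visible" decision, but the principle is the same.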
Fallout 76, for example, lets you see where other players are facing/looking at, or where are they pointing their guns even if they don't fire. The models are animated according to the input of their users.
I don't think its ticks per second are great; the game is known for significant lag when more than a dozen players are in the same place shooting at things.
> At any given time, ~50 of those games are going to be in the buy phase. Players will be purchasing equipment safely behind their spawn barriers and no shots can hurt them. We realized we don’t even need to do any server-side animation during the buy phase, we could just turn it off.
That explains the current trend in "online" video games that is so annoying: for 10 minutes of play, you have to sit through 10 minutes of lobby time and forced animations, like end-of-game animations.
On BO6 it kills me, you just want to play, sometimes you don't have more than 30 minutes for a quick video game session, and with the current games, you always have to wait a very very long time. Painfully annoying.
This is not equivalent to "lobby time or end game animations" in other games.
In Valorant (similar to Counter-Strike), each round starts with a 60-second "buy" period in which you purchase your weapons and abilities for the round. A Valorant/CS match is typically first-to-13 rounds.
Does Fortnite have a long wait? It became a phenomenon without matchmaking rank (MMR) - after all crappy players die early so they are more frequently in the queue naturally. PUBG / Battle Royale as a format solved the problem.
Can Valorant be exactly the same, and fun, but without MMR? Hmm probably not no.
Demigod (2009), the first standalone MOBA, died for two reasons: it cost money, and it lacked MMR.
Can MMR be done quickly? IMO, no. The long waits are a symptom of how sensitive such games (BO6, CSGO, Valorant, etc.) are to even small differences in skill among the players. Versus say Hearthstone, which has MMR, but it is less impactful.
Thing is, League can be offline for 24h, and people will still come back and play it the next day. This has actually happened a few times in their history. So 10m of waiting... it sucks but people do it.
Another POV - this comment is chock full of them - is that you're just not the intended audience for the Xbox / PS5 / Steam / PC Launcher channel. It's stuff for people with time. What can I say? I mean it's a misconception that this stuff isn't inherently demographics driven - the ESA really wants people to believe, which is ridiculous, that the average "gamer" is 31 years old or whatever, but in reality, you know, the audience - I don't know what "gamer" means or which 31 year olds with kids have time for this crap that you are complaining about - is a 13 year old boy with LOTS of time. 10m to them is nothing.
Looking at Apple Arcade, which has a broader audience, there are basically no multiplayer games, and you can get started playing very quickly in any of the strategy games there, so maybe that is for you.
I vaguely remember Counter-Strike Source servers running at 33, 66, or 100 tick. My high school gaming clan was called "10tik", poking fun at the ancient Pentium box that I ran the CSS server on.
You can mess with the code all day long, but you're not getting away from raw latency.
The modern matchmaking approach groups people by skill not latency, so you get a pretty wild mix of latency.
It feels nothing like the old regional servers. Sure the skill mix was varied, but at least you got your ass handed to you in crisp <10ms by actual skill. Now it's all getting knife-noscoped around a corner by a guy who already rubberbanded 200ms into the next sector of the map, while insulting your mom and wearing a unicorn skin.
Good thing they thought of that. Disclaimer: I was at Riot during some of the Valorant dev cycle, and the stated goal in this tech blog [0] was a huge one: keeping latency under 35ms.
This was only really doable because Riot has invested significantly in buying dark fiber and peering at major locations worldwide [1][2]
> The modern matchmaking approach groups people by skill not latency
I work at a game studio, and something I have seen is that nobody is on wired anymore. You are a power user if you are on wired. The vast majority, 99% of users, will be on mobile or wifi, and 10ms to their first hop or two.
I miss playing on consistent <10ms servers in the CS 1.6 days.
The Houston/Dallas/Austin/San Antonio region was like a mini universe of highly competitive FPS action. My 2mbps roadrunner cable modem could achieve single digit ping from Houston to Dallas. Back in those days we plugged the modem directly into the gaming PC.
Counter-Strike: Global Offensive was also able to handle 128 TPS just fine. They just chose to never implement it in official matchmaking (64 TPS). It did work very smoothly on community servers.
Counter-Strike 2 implements a controversial "sub tick" system on top of 64 TPS. It is not comparable to actual 128 TPS, and often worse than standard 64 TPS in practice.
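As publicly described, the core of the sub-tick idea is that inputs carry a fractional timestamp within the 64 TPS tick and are replayed in order; a hypothetical sketch (not Valve's actual code):

```python
TICK_LEN = 1.0 / 64  # seconds per tick at 64 TPS

def resolve_tick(events, tick_start, log):
    # Instead of collapsing the whole tick into one instant, apply
    # inputs in the order of their sub-tick timestamps.
    for ev in sorted(events, key=lambda e: e["time"]):
        log.append((ev["time"] - tick_start, ev["action"]))
    return log
```

This improves whose-shot-landed-first ordering within a tick, but it doesn't produce extra state snapshots the way a real 128 TPS server does, which is roughly the complaint here.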
Lots of things work fine when you throw twice as many dollars at them. It’s not a matter of it working or not. It’s a matter of economics.
Most game servers are single threaded because the goal is to support the maximum number of players per dollar.
A community server doesn’t mind throwing more compute dollars to support more players or higher tick rate. When you have one million concurrent players - as CounterStrike sometimes does - the choice may be different.
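The economics can be made concrete with back-of-the-envelope arithmetic (using the 2.34 ms per-tick budget the article mentions; treating it as the full per-match cost is an assumption):

```python
TICK_BUDGET_MS = 1000.0 / 128   # 7.8125 ms of wall time per tick at 128 TPS
COST_PER_MATCH_MS = 2.34        # assumed per-tick CPU cost of one match

# How many matches one core can host while still hitting every tick:
matches_per_core = int(TICK_BUDGET_MS // COST_PER_MATCH_MS)  # 3
```

Halving the per-match cost roughly doubles matches per core, which at a million concurrent players is a very large hardware bill; hence the economics framing.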
> In VALORANT’s case, .5ms is a meaningful chunk of our 2.34ms budget. You could process nearly a 1/4th of a frame in that time! There’s 0% chance that any of the game server’s memory is still going to be hot in cache.
This feels like a less-than-ideal architectural choice, if that's the case!?
Sounds like each game server is independent. I wonder if anyone has done more shared-state multi-hosting? Warm up a service process, then fork it as needed, so there's some shared i-cache? Have things like levels and hitboxes in an immutable memfd, shared with each service instance, so that the d-cache can maybe be shared across instances?
With Spectre/Meltdown et al., a context switch probably has to totally flush the caches nowadays? So maybe this wouldn't be enough to keep data hot, and you might need a multi-threaded rather than multi-process architecture to see shared caching wins. Obviously I dunno, but it feels like caches are shorter-lived than they used to be!
I remember being super hopeful that maybe something like Google Stadia could open up some interesting game architecture wins, by trying to render multiple different clients cooperatively rather than as individual client processes. Afaik nothing like that ever emerged, but it feels like there are some cool architecture wins out there.
It does sound like each server is its own process. I think you're correct that it would be a little faster if all games shared a single process. That said, then if one crashed it'd bring the rest down.
This is one of those things that might take weeks just to _test_. Personally I suspect the speedup by merging them would be pretty minor, so I think they've made the right choice just keeping them separate.
I've found context switching to be surprisingly cheap when you only have a few hundred threads. But ultimately, there's no way to know for sure without testing it. A lot of optimization is just vibes and hypotheses.
This post reads less like an engineering deep dive and more like a Xeon product brochure that wandered into a video game blog. They casually name-drop every Intel optimization short of tattooing "Hyperthreaded" on their foreheads.
Well, of course they would. They bought all Intel hardware, and they are making one of the most performant multiplayer servers ever. They should be mentioning every optimization possible. If they had AMD Threadripper servers, they would mention all of those features too.
I am currently doing this! Working on an MMO game server implemented in Elixir. It works AMAZINGLY, and you get so many extra observability and reliability features for FREE.
I don't know why it's not more popular. Before I started the project, some people said that the BEAM VM would not cut it for performance. But this was not true. For many types of games, we are not doing expensive computation on each tick. Rather, it's just checking rules for interactions between clients and some quick AABB + visibility checks.
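Those per-tick checks really are cheap; an AABB overlap test is a few comparisons (a sketch in Python rather than Elixir, for illustration):

```python
def aabb_overlap(a, b):
    # Boxes as (min_x, min_y, max_x, max_y): they overlap iff they
    # overlap on every axis independently.
    return (a[0] <= b[2] and b[0] <= a[2] and
            a[1] <= b[3] and b[1] <= a[3])
```

With a spatial index on top (grid or quadtree), even thousands of entities per tick stay well inside a tick budget.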
I distinctly remembered that Eve Online was in Erlang, went to go find sources and found out I was 100% wrong. But I did find this thread about a game called "Vendetta Online" that has Erlang... involved, though the blog post with details seems to be gone. Anyway, enjoy! http://lambda-the-ultimate.org/node/2102
You'll never get a modern FPS game server with good performance written in a GC language. Erlang is also pretty slow; it has Python-like performance, very far from C#, Go, and Java.
The other reason is that the client and the server have to be written in the same language.
I've been told that Erlang is somewhat popular for matchmaking servers. It ran the Call of Duty matchmaking at one point. Not the actual game servers though - those are almost certainly C++ for perf reasons.
Network connection, lobby, matchmaking, leaderboards or even chats, yes. But the actual simulation, probably not for fast paced twitchy shooter.
Also not just for performance reasons, I wouldn’t call BeamVM hard realtime, but also for code. Your game server would usually be the client but headless (without rendering). Helps with reuse and architecture.
Very interesting read. It seems like management, engineering, and vendors were all willing to get on the same page to hit the frame budget. Especially the bit about profiling every line of game code into an appropriate bucket - that sounds like a lot of work which paid off handsomely.
If you just make a list of “performance tweaks” you might learn about in, say, a game dev blog post on the internet, and execute them without considering your application’s specific needs and considerations, you might hurt performance more than you help it.
Sub tick is probably more accurate overall, but I do think the CS2 animation netcode is crap and hides a lot of the positives. Hopefully moving to Animgraph 2 will help that, who knows.
This is from 2020. Valve wanted to be smart and invented a new "subtick" system in 2023 which isn't as good as 128 tick. To make things worse, CS is a paid game, not free like Valorant, and probably makes much more money. They seemingly just don't care enough about the problem to solve it correctly. That, or there is more work to be done on subtick to make it work better than 128 tick.
In general, Valve designs software that is incomparably better than Riot. Compare the League of Legends client to the Dota 2 game (which doesn't even have a client/game distinction), for instance - the quality gap is massive in favor of Valve.
As a lifelong Valve/CS fan, I've been so disappointed with subtick. It was pitched as generational evolution to the games netcode. Yet years later they're still playing catchup to what CS:GO provided..
Hopefully competition from Valorant and others puts more pressure to make things happen at Valve.
But animations are now lerped every 4 frames, so the effective tick rate is 32 with interpolation. Not sure if sudden direction changes might now result in ghost hits. Some hardcore Quake fans probably know the answer.
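Keyframing every 4th tick and lerping between keyframes would look roughly like this (illustrative; not CS2's actual code):

```python
def sample_pos(keyframes, tick):
    # keyframes holds positions recorded every 4 ticks (32 Hz on a
    # 128-tick timeline); intermediate ticks are linearly interpolated.
    k, r = divmod(tick, 4)
    a = keyframes[k]
    b = keyframes[min(k + 1, len(keyframes) - 1)]
    return a + (b - a) * (r / 4.0)
```

The ghost-hit worry is exactly the failure mode of lerp: a sharp direction change between keyframes gets smoothed into positions the player never actually occupied.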
[1]: https://www.reddit.com/r/battlefield_4/comments/1xtq4a/battl...
[2]: https://www.youtube.com/@BattleNonSense
[3]: https://www.reddit.com/r/battlefield_4/comments/35ci2r/120hz...
[4]: https://www.reddit.com/r/battlefield_4/comments/3my0re/high_...
https://www.youtube.com/watch?v=ftC1Rpi8mtg
Either way, this stuff is engineering catnip.
> We were still running on the older Intel Xeon E5 processors, ...

> Moving to the more modern Xeon Scalable processors showed major performance gains for our server application

But I was unable to find any mention in the article of which processors they were actually comparing in their before/after.
[0]: https://technology.riotgames.com/news/peeking-valorants-netc...
[1]: https://technology.riotgames.com/news/fixing-internet-real-t...
[2]: https://technology.riotgames.com/news/fixing-internet-real-t...
I was imagining some blindingly fast C or Rust on bare metal.
That UE4 code snippet is brutal on the eyes.