I played competitive-level Quake 3/Quake Live for a number of years, during the height of its popularity. I also have friends who play more modern competitive FPS games, such as CS:GO. Even though QL was only a small online scene, the use of aimbots, triggerbots, wallhacks, and similar tools was always very common at the competitive level. Quake Live had very limited anti-cheat functionality, so it was never relied upon for proving legitimacy. Despite this, cheaters generally didn't last long in competitive play before being banned. Even if the computer can't unequivocally identify the presence of these tools, skilled players could usually tell the difference. And even when watching alone wasn't enough, other contextual factors usually gave hackers away, such as a lack of LAN experience or no history of natural skill progression. I'm aware that this doesn't prove we caught every hacker, just that we caught many of them.
These cheat tools could apparently be tuned to give extremely subtle advantages to players. Still, over a long enough time period the tool would eventually do something inexplicable by normal behaviour and give the player away. I haven't kept up with competitive FPS in years now, but it doesn't seem hard to make aimbots that work well; it seems very difficult, though, to make ones that consistently appear natural to a trained eye.
One may wonder what exactly the point is of a human watching this auto-playing game system, and whether they'd be better off watching a movie... But another way to look at it is as a preview of the future.
Just picture this: a world of machines battling it out for supremacy, and somewhere off to the side there is an almost forgotten human brain. It seems to serve no discernible purpose, and yet it is actually the reason all the machines exist. What they are actually fighting to optimize, unbeknownst to them, is the dopamine release into a primate brain. Bonus points if the brain itself is a simulation :)
Cheats are about two things: money or prestige. People want to get back at the players who so rudely beat them fairly, and earn high competitive rankings that make them look better than other players in that game's community. Or they use cheats to build high-ranking accounts to sell to players who want to look good, or they simply cheat to win prize money.
It should go without saying that this kind of activity destroys the interest in games affected by it. People don't like playing against superhuman enemies and having their progress impeded by it.
It doesn't make sense that cheating would be enjoyable to play with, unless what you actually enjoy is making money or making other people feel bad.
Said primates have a dopamine dip if they realize their life has no meaning, so the AI must provide them some meaning (an illusion) so that they get back to the desired dopamine levels.
EDIT: for example by pretending they are just crappy machines and they need a human to be really good at doing things
EDIT2: oh too much snow, I can't drive, please human help me out!
Yeah, no. A machine does not have motivation. It does not have urges and desires. It may solve a problem perfectly, but it's gonna be a while until they can grok why solving problems is worthwhile to begin with.
This is just one example of the "analogue hole" [1] problem shared by all anti-cheat/DRM systems.
At least in theory, there is no technology that can prevent exploits like this short of dystopian levels of surveillance and locking down computing devices even further.
By that I mean encrypted communication on all computer buses (including USB, HDMI), and only allowing access to those busses via physically hardened "secure" enclaves, up to (in the end game) big-brother-like surveillance (think electronic proctoring solutions).
I think that this is exactly the problem with such DRM schemes: the ensuing cat-and-mouse game will inevitably lead to trampling the user's freedoms, because locking down computing devices and environments to ridiculous levels is the only way in which DRM can be made to work.
Of course, for now, cheats like the one featured in the article should be fairly easy to detect (at least from what I've seen in the linked video).
The motion of the bot is extremely jerky; a simple rule-based system, or, if you want to be fancy, a neural network based anomaly detection system should be able to detect this.
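A minimal sketch of such a rule-based check, counting sudden "teleport"-style crosshair snaps in a stream of per-frame mouse deltas (all thresholds are invented for illustration; a real detector would need tuning against legitimate high-level play):

```python
def flag_jerky_aim(deltas, snap_threshold=40.0, max_snaps=3):
    """Flag a stream of per-frame mouse deltas (dx, dy) as suspicious.

    Crude rule: human aim accelerates smoothly, while a naive aimbot
    "teleports" the crosshair. Count frames where the change in velocity
    exceeds a threshold; too many snaps => suspicious.
    """
    snaps = 0
    prev = (0.0, 0.0)
    for dx, dy in deltas:
        # Magnitude of the change in velocity between consecutive frames.
        jerk = ((dx - prev[0]) ** 2 + (dy - prev[1]) ** 2) ** 0.5
        if jerk > snap_threshold:
            snaps += 1
        prev = (dx, dy)
    return snaps > max_snaps
```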
On the side of the cheat authors, this could be easily circumvented if they include a "calibration phase", where user input trains a simple neural network to stochastically emulate the dynamics of the user's sensor-action loop. The cheat could then act slightly faster than the user, giving them an edge while still using their unique dynamics profile.
I wonder where this will lead eventually, and I genuinely feel sorry for all the people who pour their heart and soul into competitive gaming; I don't think that this kind of cheating is something that can or should (see above) be prevented in the long run.
The best possible outcome I can imagine is that online gaming becomes more cooperative or once more converges back to small groups of people who know and trust each other.
[1] https://en.wikipedia.org/wiki/Analog_hole
The solution is really simple: make all competitive gaming events LANs, with standardized hardware that players don't touch before the event starts.
For regular online gaming, you can train a neural net to detect cheats like this, biased by the player's score. If the cheat is introducing enough error for the player to be killable, it's not ruining the experience for the rest of the players.
Valve has already been doing that for a few years: https://www.pcgamer.com/vacnet-csgo/
>By that I mean encrypted communication on all computer buses (including USB, HDMI)
That only delays things since in the end you still need a human being to be able to play. So you can have a camera looking at the screen and a mouse/keyboard with some wires soldered to the key points.
>will inevitably lead to trampling the user's freedoms
People keep saying this but it happened 20 years ago. This reminds me of shit like the postal service requiring photo ID to receive a package and people complaining about the NSA hundreds of years later. Now you need a phone to play a game, and some of the most popular games need literal photo ID checks. Imagine sending your photo ID, which if stolen can be used to steal your money, to a bunch of new grads running a game studio. This is what people (kids and manchildren) accept to address the overstated problem of game cheating. I played thousands of hours of games over 20 years and the number of cheaters I ran into is around 10 or 20. Most players of games (including the ones who complain about "cheaters") do not even have a clue what a game cheat is. They think some guy has some cheat that only works in this weird scenario that happens 1/100 games. Yeah, can you guys stop making me need photo ID to play some stupid game? This is no different from every obnoxious statist concern that gets addressed by some charlatan who purports to be saving the world by ruining everyone's day (almost any time I install or configure a game my day is ruined; imagine typical dependency hell but 10x worse). And no, the reason I've run into so few cheaters isn't "sophisticated anticheat" (stuff like PunkBuster is extremely incompetent); it's that public hacks simply get blacklisted once they become big enough to matter.
On a much simpler level, back in my day I wrote bots for flash/browser games basically by detecting specific pixels on the screen and acting accordingly. Sounds stupid, but with simple games this could work very well.
I never got 'detected'; how would they? And some of my bots easily became more skilled than I ever was.
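A pixel-detection bot of this sort boils down to a tiny scanning loop. Here is a hypothetical sketch (the frame layout and colours are made up; a real bot would grab the screen with an automation library and then send a click at the returned coordinates):

```python
def find_pixel(frame, target, tolerance=10):
    """Scan a frame (rows of (r, g, b) tuples) for the first pixel that
    matches `target` within `tolerance` per channel.

    Returns the (x, y) coordinates of the match, or None.
    """
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if all(abs(c - t) <= tolerance for c, t in zip((r, g, b), target)):
                return (x, y)
    return None
```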
I did something like this with Dota 2: a second computer monitored the opposing team and saved their locations, shown in a shaded color on a second monitor, whenever they appeared on the minimap. I used an HDMI splitter and an HDMI capture card to send the video feed to the second computer, and did the CV work with the OpenCV library. Taking it a step further, as shown in the article, you could do things like automated creep-killing against humans.
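The minimap-detection part of a setup like this can be approximated with a simple per-channel colour threshold, the same idea as OpenCV's inRange. A sketch in plain NumPy (the colour bounds are invented; a real version would calibrate them to the game's actual minimap markers):

```python
import numpy as np

def enemy_positions(minimap, lower, upper):
    """Return (x, y) pixel coordinates on a minimap crop (H x W x 3 uint8
    array) whose colour falls inside [lower, upper] per channel."""
    lower = np.array(lower)
    upper = np.array(upper)
    # Boolean mask: True where all three channels are within bounds.
    mask = np.all((minimap >= lower) & (minimap <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    return list(zip(xs.tolist(), ys.tolist()))
```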
Cheating in the game "Warzone" has been a really interesting thing to watch unfold. It makes the game really frustrating to play and I have stopped playing it, but outside of that it's interesting to watch players, even high-visibility streamers, attempt to play without getting caught, and succeed at it.
In Warzone the map is huge, and 150 players are split into competing squads of 1-4 players, so it becomes very much an information-heavy game. Knowing where people are, even when they're hundreds of meters away, is extremely valuable, and that's one of the hacks people have: they see nametags, hitboxes and player information overlaid on their screen. These also change colour based on whether the player is in line-of-sight, so you can say "I see that squad, but they can't shoot at me, so I am safe", and that's extremely valuable.
Second to that, they can use aim-assist, and the highest-end aimbots are really quite nuanced: they let you smooth out the assisted movements and only auto-aim in specific scenarios, which makes it really hard to tell an aimbot is in use. It just looks like a really good player. Gone are the obvious "snap to head" aimbots of Counter-Strike.
You can report someone all day long and they may never get banned, because the hack software is competent and it's hard to demarcate "knowledge you shouldn't be able to have" from "the intuition of a good player".
I did this for the FPS game Valorant, not for cheating, but to see if I could make a 100% computer-controlled opponent.[1]
It involved some weird Logitech mouse hacking for the control side of things and was overall really rough, especially with how much latency was involved.
With a normal USB capture device, the latency is around 50-100ms, so it’s hard to do lightning-fast reaction. Even more so if your view is moving at the same time. Everything that normal aimbots do by directly accessing the game’s memory suddenly becomes much harder when you don’t have memory access.
For anyone interested though, the computer vision models publicly available are both very good and also easy to fine-tune for a specific use case (e.g. a specific game).
[1] https://riveducha.onfabrica.com/valorant-ai-pytorch-opencv-l...
I remember, over a decade ago, doing something far less sophisticated to "cheat" in a turn-based projectile physics game: I used a nomogram (https://en.wikipedia.org/wiki/Nomogram) printed on transparency and just put it on the monitor to quickly calculate parameters between turns.
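For an idealized drag-free projectile, the parameter such a nomogram encodes can also be computed in closed form from the range equation R = v² sin(2θ)/g (actual games often add drag or wind, so this is only an approximation):

```python
import math

def launch_angle(speed, distance, g=9.81):
    """Low-arc launch angle (radians) to hit a target at `distance` on
    flat ground with muzzle `speed`, ignoring drag.

    Solves distance = speed**2 * sin(2*theta) / g for theta.
    Returns None when the target is out of range.
    """
    s = g * distance / (speed * speed)
    if s > 1.0:
        return None  # max range is speed**2 / g
    return 0.5 * math.asin(s)
```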
At a LAN party a few years ago one of the games being played was Modern Warfare 2, a game where your gun is almost always going to hit in exactly the centre of the screen, but that’s made more difficult by requiring you to look down the sights to get an aim point, which slows you down.
One of the people near me worked around that by taking a few shots down the sights and then sticking a blob of blue tack on his monitor where the shots hit, creating an instant targeting reticle available at all times.
Game developers should stop trying to police this and let the bots fight each other. The game becomes: who can tune their bots better? Train the bots for teamwork or more strategic-level play.
In the meantime, the rest of us can just enjoy some co-op gaming against some easy bots, or play competitive matches against just friends and family.
If some random person on your server is no fun to play against, just kick them.
If you are really into competitive gaming and want to take games more seriously, I'd like to see players meet other players face to face. Join a local league and play on hardware you don't own, with other people watching. I used to love playing Quake down at the local internet cafe with a bunch of friends. We had a small 4v4 league going.
Funny, recently there were threads about anti-cheat rootkits where people argued they're justified because they provide some defense against cheaters who wouldn't be sophisticated enough to exploit the analog hole.
https://news.ycombinator.com/item?id=27640553
I wonder if the anti-cheat system could insert ghost enemies for a few frames, about 20-30ms, fast enough that humans wouldn't really notice. If players shoot too often at ghosts, that may be a good reason to ban them.
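The ban logic for such a honeypot could be as simple as a ratio test. A sketch, with all thresholds invented for illustration (a real system would tune them against false-positive data):

```python
def ghost_ban_candidates(shot_log, max_ratio=0.05, min_shots=50):
    """Given per-player logs of shots as booleans (True = the shot was
    aimed at a 'ghost' enemy rendered for only a few frames), return
    players whose ghost-shot ratio is implausibly high.

    min_shots guards against flagging players on tiny samples.
    """
    flagged = []
    for player, shots in shot_log.items():
        if len(shots) >= min_shots:
            ratio = sum(shots) / len(shots)
            if ratio > max_ratio:
                flagged.append(player)
    return flagged
```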
Aimbots already exist in a practical sense as autonomous drones. Why go through the trouble of creating an aimbot system for human use when you can just replace the human entirely?
This is close to my ideal way of avoiding a runaway Ultron/Skynet-type AI scenario: airgap it and force the use of a physical mouse, monitor, and keyboard to interact with the outside world. At least once it reaches Twitter we can stop it before it destroys humanity one robotic keystroke at a time.
It will type its own source code (but better) into a hacked TPU/GPU/whatever cluster using a keyboard and vim, no problem.
I was thinking about this the other day when I saw a YouTube Warzone streamer lament that the game didn't have an anti-cheat system on the end user's machine.
I do wonder about the claim that it is undetectable, though, since a machine will be flawless whereas even the best human players make mistakes.
I think it's currently impossible to detect. It might become detectable if you used some kind of player profile that characterized how a player moves, and then determined from a replay whether the profiles match. But then you could program the cheat AI to mimic the real player, and so on; this would take several evolutions, with each side's tech ahead of the other in turn.
I stopped playing Warzone due to the number of cheaters. Most of the streamers cheat too, and it's sad. It can be a lot of fun to play, but it's like Counter-Strike back in the day, when it got overrun by cheaters until Valve started auto-detecting them.
There are videos of a popular streamer called symphony or something where he's shooting people through walls, or sniping people, when zoomed in, who aren't even visible. Then he jokes, "oh, someone will call me a cheater".
Valve has been using deep-learning detection to analyze replays for years. Based on behaviors like reaction times and shooting precision, they can flag suspicious cases. Anyone thinking it is undetectable is completely wrong.
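As a toy illustration of the reaction-time side of this (the numbers are rough guesses, not anything Valve actually uses): flag a player when too large a fraction of their target-appears-to-first-shot times falls below a plausible human floor.

```python
def suspicious_reactions(times_ms, human_floor_ms=120, max_fraction=0.1):
    """Flag a sample of reaction times (ms) if too many fall below a
    plausible human floor (~120 ms is a rough figure for elite players;
    both thresholds are illustrative)."""
    if not times_ms:
        return False
    fast = sum(1 for t in times_ms if t < human_floor_ms)
    return fast / len(times_ms) > max_fraction
```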
Does anyone know why Intel FOGS did not get popular? https://web.archive.org/web/20070707090927/http://www.techno...
Hardware-signing input-device events would advance the cat-and-mouse game by a lot. Then you'd need something physically moving the input device for you to aimbot, or you'd have to pay someone to properly crack open the secure enclave on your input device.
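The signing idea amounts to having the mouse authenticate each input report, e.g. with an HMAC over the report plus a monotonic counter to prevent replays. A sketch (the key handling is hypothetical; in a real device the key would live inside the secure enclave and never be readable by the host):

```python
import hashlib
import hmac

# Hypothetical per-device secret, burned into the mouse at manufacture.
DEVICE_KEY = b"per-device-secret"

def sign_report(report: bytes, counter: int) -> bytes:
    """Sign one input report plus a monotonic counter (anti-replay)."""
    msg = counter.to_bytes(8, "big") + report
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()

def verify_report(report: bytes, counter: int, tag: bytes) -> bool:
    """Server-side check that the report came from the real device."""
    return hmac.compare_digest(sign_report(report, counter), tag)
```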
Just make a "USB mouse" which is actually a programmable Arduino. What makes an aimbot possible is that something can read the game's state and turn it into a mouse-move signal; it can then feed that into the Arduino instead of directly moving the mouse.
The problem is preventing game state from being read. Part of this can be done by detecting DLL injections.
Sadly you will always be able to write a monitor that runs as root. After all, the kernel literally administrates the computer's memory. Of course it has read, and write, access. So in the worst case, cheaters will use custom "memory drivers".
This is so interesting. I've been an avid CSGO player for decades, and watching the cat-and-mouse battle between cheat creators and Valve has been fascinating. In the last few years, Valve has been pushing an ML-centric approach to cheat detection (vs. more intrusive client-side solutions from competitors like Valorant), and the community feels negatively about it because it hasn't been so effective in the short term. But now, after reading this, I understand why: the only way to combat ML cheats is with ML detection.
I was competitive (not professional, but I played with, against, and around pros) in FPS in the early days of competitive console shooters. Cheating was so rampant that people over a certain ranking were automatically considered cheaters. I would say the top 5 percent of ranked players were cheaters in some games, and it was not subtle. You knew immediately who was cheating, but there usually wasn't much you could do, because there wasn't any kind of reporting function, or if it existed it was limited and not very effective.
> Those display frames are then run through a computer vision-based object detection algorithm like You Only Look Once (YOLO) that has been trained to find human-shaped enemies in the image (or at least in a small central portion of the image near the targeting reticle).
I wonder how well these pre-trained networks would work in a real-life automated turret.
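For context, once a detector like YOLO returns a bounding box, the remaining aim step is just geometry: move the reticle from the frame centre to the box centre. A sketch (the linear pixels-to-mouse-counts mapping is an assumption; real games apply their own sensitivity curves):

```python
def aim_delta(frame_w, frame_h, box, sensitivity=1.0):
    """Given a detection box (x1, y1, x2, y2) in frame pixels, return the
    (dx, dy) mouse movement that would centre the reticle on the target,
    assuming the reticle sits at frame centre."""
    x1, y1, x2, y2 = box
    tx = (x1 + x2) / 2.0  # target centre x
    ty = (y1 + y2) / 2.0  # target centre y
    return ((tx - frame_w / 2.0) * sensitivity,
            (ty - frame_h / 2.0) * sensitivity)
```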