AntiCheat - An Analysis

Posted on Tue 15 November 2016 in AntiCheat

When talking with people about anticheat in games, a few common topics usually emerge. Last time I had such a discussion, I was told that it could make for an interesting blog post. This is essentially my opinion of the different forms of anticheat: when they work and when they don’t, when it’s no longer worth the effort, and what the alternatives are.

Accuracy

The general benchmark for what constitutes a good anticheat is how accurate it is. How many cheaters does it catch? How many innocent players does it trigger for? Perfect accuracy isn’t achievable, so a compromise must always be made. Depending on factors such as the game and the company developing it, the compromise can vary. Minimising the chance of false positives is the most common and accepted approach, since it improves faith in the system.
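To illustrate why false positives dominate this trade-off, here’s a quick back-of-the-envelope calculation (all of the numbers are made up): when cheaters are a small minority of the player base, even a tiny false-positive rate can mean a surprisingly large share of bans hit innocent players.

```python
# Back-of-the-envelope illustration with made-up numbers: even a small
# false-positive rate flags many innocent players when cheaters are rare.
players = 1_000_000          # total active players (assumed)
cheater_rate = 0.01          # 1% of players cheat (assumed)
detection_rate = 0.80        # fraction of cheaters the system catches (assumed)
false_positive_rate = 0.001  # 0.1% of innocent players wrongly flagged (assumed)

cheaters = players * cheater_rate
innocents = players - cheaters

caught = cheaters * detection_rate
wrongly_flagged = innocents * false_positive_rate

print(f"Cheaters caught:   {caught:,.0f}")        # 8,000
print(f"Innocents flagged: {wrongly_flagged:,.0f}")  # 990
print(f"Share of bans that are false: {wrongly_flagged / (caught + wrongly_flagged):.1%}")
```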

Some games have used systems where any irregularities are flagged and manually verified before a ban is applied. This approach, whilst very accurate, demands a massive workforce. Other games may impose a ban for every irregularity and instead provide systems to correct false positives. However, this approach can cause players to lose faith in the system and lead to clashes between the community and developers over legitimate bans that are claimed to be false positives. In games with a large cheating problem, it is sometimes worth sacrificing a few false positives in order to catch more cheaters.

When it comes to dealing with false positives on an individual basis, it is important that there be a system in place to revert them. I personally have a false-positive VAC ban on my Steam account from early 2015.

Techniques

There are numerous techniques for anticheat, some of which can be employed simultaneously for improved detection.

Signature Detection / Heuristics

A form of anticheat, most famously known from Valve’s “Valve Anti-Cheat” (VAC), is signature detection. This technique involves scanning the system memory for known ‘memory signatures’. In the case of VAC, this is coupled with detection of dynamic library injection, as well as other undisclosed systems.

Signature detection can be accurate in some circumstances, though when used alone it is easily defeated. For example, simpler implementations of this technique can be bypassed merely by recompiling the cheat in question. It also requires the cheats to be known to the developers, causing a delay between a cheat being released and it being detected. This approach is commonly used by antivirus applications and is therefore often chosen by general security researchers to combat cheaters in online games. The technique must run on the client and can therefore potentially be spoofed. The requirement to run on the client and the need to manually find the signatures of cheats often make this impractical for smaller developers.
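As a rough sketch of the core idea (and not how VAC actually works internally), a naive signature scanner simply searches memory for known byte patterns. The cheat names and signatures below are entirely made up; the point is that changing even a single byte, for example by recompiling the cheat, breaks an exact match.

```python
# Naive sketch of signature detection: search a memory buffer for known byte
# patterns. Real implementations read live process memory and use far more
# robust matching; the signatures below are made up for illustration.
KNOWN_SIGNATURES = {
    "ExampleAimbot v1": bytes.fromhex("4883EC28488B0D"),    # hypothetical pattern
    "ExampleWallhack":  bytes.fromhex("DEADBEEF90909090"),  # hypothetical pattern
}

def scan_memory(memory: bytes) -> list[str]:
    """Return the names of any known cheat signatures found in the buffer."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in memory]

# Usage: 'memory' would come from reading the game process's address space.
memory = bytes.fromhex("00114883EC28488B0D22")
print(scan_memory(memory))  # ['ExampleAimbot v1']
```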

Server Validation

Another form of anticheat, very commonly used in games with a centralized server, is validation of player input. This involves determining whether what a player has sent to the server is a valid interaction. It can be very resource-intensive for the server and is therefore generally only used for simple-to-detect cheats, such as teleporting and damage spoofing.
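As a minimal sketch of what this might look like for movement, the server can reject any position update that implies an impossible speed. The speed limit and function below are assumptions for illustration; a real server would also have to account for latency, teleport abilities, vehicles and so on.

```python
import math

# Minimal sketch of server-side movement validation. MAX_SPEED is an assumed
# value; a real server would tune this per game and account for latency,
# legitimate teleports, vehicles, knockback, etc.
MAX_SPEED = 10.0  # maximum legitimate units per second (assumed)

def is_valid_move(old_pos, new_pos, delta_time):
    """Reject position updates that imply an impossible movement speed."""
    if delta_time <= 0:
        return False
    distance = math.dist(old_pos, new_pos)
    return distance / delta_time <= MAX_SPEED

# A player "moving" 500 units in a 50 ms tick is almost certainly teleporting.
print(is_valid_move((0, 0, 0), (500, 0, 0), 0.05))  # False
```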

Server validation is my personal favorite and the form I am most familiar with. This technique cannot perfectly prevent cheating either; it only makes it difficult for cheats to perform non-player-like actions, limiting their usefulness. This form of player validation is known as “Pattern Detection” and uses a heuristic approach to detect cheaters based on their behaviour on the server. However, limiting false positives with this method can be incredibly difficult. For example, it can be extremely hard to differentiate an exceptionally skilled player from a cheater using an aimbot. I have written a more thorough post on aimbot detection here.
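To give a flavour of pattern detection, here’s a toy heuristic that flags players whose view snaps through an implausibly large angle just before a kill. The thresholds are invented, and on its own a single metric like this would produce far too many false positives, which is exactly the difficulty described above.

```python
# Toy pattern-detection heuristic: flag players whose view angle snaps by an
# implausibly large amount in the tick immediately before a kill. Thresholds
# are made up; a real system would combine many signals to limit false positives.
SNAP_THRESHOLD_DEGREES = 40.0   # assumed: largest believable flick in one tick
SUSPICIOUS_RATIO = 0.5          # assumed: fraction of kills preceded by a snap

def is_suspicious(kill_snap_angles: list[float]) -> bool:
    """kill_snap_angles: view-angle change (degrees) in the tick before each kill."""
    if not kill_snap_angles:
        return False
    snaps = sum(1 for angle in kill_snap_angles if angle > SNAP_THRESHOLD_DEGREES)
    return snaps / len(kill_snap_angles) > SUSPICIOUS_RATIO

print(is_suspicious([3.2, 75.0, 68.4, 1.1, 82.9]))  # True: 3 of 5 kills follow a snap
```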

Performance Review

This technique, which goes by many names, refers to having someone ‘spectate’ or otherwise monitor the player whilst they play. Whilst this can be accurate, assuming the spectator is experienced, it is extremely labour-intensive. Games such as ‘Counter-Strike: Global Offensive’ use a crowd-sourced system where matches are recorded and watched by numerous members of a committee, who each pass judgement on a reported player. This prevents one inexperienced spectator from incorrectly banning a player. When false positives do occur, it is generally due to exceptionally high levels of skill from the player. Many administrators of private game servers use this technique to determine whether a player who has been accused of cheating is actually cheating.
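To sketch how such judgements might be combined so that no single reviewer can ban a player, verdicts can be aggregated and only acted upon above a consensus threshold. The threshold below is an assumption, not how any particular game actually weights its reviewers.

```python
# Sketch of aggregating verdicts from multiple reviewers so one inexperienced
# spectator cannot ban a player on their own. The consensus threshold is an
# assumption for illustration only.
CONSENSUS_THRESHOLD = 0.8  # assumed: 80% of reviewers must agree

def reach_verdict(verdicts: list[bool]) -> str:
    """verdicts: True = 'cheating', False = 'not cheating', one per reviewer."""
    if not verdicts:
        return "insufficient evidence"
    guilty_share = sum(verdicts) / len(verdicts)
    return "ban" if guilty_share >= CONSENSUS_THRESHOLD else "no action"

print(reach_verdict([True, True, True, True, True]))    # 'ban'
print(reach_verdict([True, True, True, False, False]))  # 'no action'
```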

Punishments

The punishments for cheating are an oft-debated topic, with opinions varying from extremely harsh to extremely lenient. On the harsh side, some services permanently ban a computer for a single cheating infraction. On the lenient side, services may simply warn the player or revert actions deemed to be related to cheating. None of these are wrong; the punishment should fit in with the specific circumstances of the game.

Another aspect that must be considered is how to deal with false positives. This becomes more important the more severe the punishment is. I touched upon this briefly in the section on accuracy, but there is much more to it. When false positives do occur, it’s better to have a community that doesn’t believe in them; otherwise, false claims of false positives can tarnish the reputation of a company. If someone claims to have been banned falsely and does not get unbanned, anyone who believes them may lose faith in the company, or stop purchasing its games in the future for fear of being banned.

With respect to severity, much research has been conducted in this area. It’s evident that with a low-severity punishment, such as a three-warning system, people who are on the fence about cheating are more likely to cheat. In systems where punishments are very severe, the undecided are less likely to cheat. Perhaps unsurprisingly, it has been found that any ban longer than three years is unlikely to have an additional impact on the likelihood of cheating: if someone is willing to cheat and receive a three-year ban, they will cheat and receive a lifetime ban.

One interesting aspect of cheating punishments is social punishment. The most well-known implementation of this is Valve’s VAC: when banned, players have a red notice on their profile page that informs other players that this person has cheated in the past. This can lead to situations where the player is kicked from online games out of fear that they will cheat, or banned from communities for their past actions. Studies have shown that banned players are likely to have a smaller friends list after the ban.

Alternatives

One potential alternative to anticheat in some situations is to design the gameplay in a way that limits the impact of cheats. One example of this is Facepunch’s game Rust. In Rust, there is a gameplay mechanic that allows players to protect their items and doors with a combination lock. Brute-forcing cheats quickly emerged that attempted codes until they found the correct one. These were patched by having the combination lock ‘zap’ the player upon an incorrect code, with the shock increasing in severity on each wrong attempt. Not only does this add to the gameplay, it also prevents players from using this form of cheat. Of course, not every form of cheating can be solved with this method, but it’s something to consider.
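As a sketch of how a mechanic like this might work (this is not Facepunch’s actual implementation, and the damage values are made up), the lock simply deals escalating damage for each consecutive wrong guess, killing a brute-forcing player long before the code space is exhausted.

```python
# Sketch of an escalating-penalty combination lock (not Facepunch's actual
# implementation). Damage values are assumptions; the point is that damage
# grows with each consecutive wrong guess, making brute force lethal long
# before the code space is exhausted.
BASE_DAMAGE = 5.0        # assumed damage for the first wrong guess
DAMAGE_MULTIPLIER = 2.0  # assumed escalation factor per wrong guess

class CodeLock:
    def __init__(self, code: str):
        self.code = code
        self.wrong_attempts = 0

    def try_code(self, guess: str, player_health: float) -> tuple[bool, float]:
        """Return (unlocked, remaining_health) for a single guess."""
        if guess == self.code:
            self.wrong_attempts = 0
            return True, player_health
        damage = BASE_DAMAGE * (DAMAGE_MULTIPLIER ** self.wrong_attempts)
        self.wrong_attempts += 1
        return False, max(0.0, player_health - damage)

# A brute-forcing player dies after only a handful of guesses:
lock, health = CodeLock("7345"), 100.0
for guess in ("0000", "0001", "0002", "0003", "0004"):
    _, health = lock.try_code(guess, health)
print(health)  # 0.0 — 5 + 10 + 20 + 40 + 80 damage kills well before success
```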

Is it worth it?

From a business perspective, creating a state-of-the-art anticheat system may not always be worth it. Minimising cheating is an important goal to have, but not if it ends up consuming a large portion of available resources. If preventing cheats cuts into time that could be spent adding features or fixing other issues in a game, it may no longer be worth it. If a game is having serious cheating problems, it may be worth rethinking the type of anticheat in use, rather than spending large amounts of time to implement small improvements to the current system. Adding features and increasing the value of the game for customers has a direct impact on the bottom line of a company.

Only in very severe situations does cheating have a statistically significant impact on the actual success of a game; if there is only a minor cheating problem, it’s probably not worth spending a lot of time and money on it.

Conclusion

Anticheat is not a ‘one size fits all’ system. It should be chosen to suit each game and the specific resources the development team has available. Depending on the support measures and the rate of false positives, appropriate punishments need to be decided upon as well. Anticheat requires more thought than most people give it, and it is a vital part of a game. Solutions that are poorly chosen or improperly managed are more of a burden on the players than on the cheaters they are there to prevent. A well-established anticheat system can make the game more enjoyable by reinforcing the player’s confidence in the fairness of the game.