# Mixin' It Up

Every finite game has at least one Nash Equilibrium: a set of strategies, one per player, that are best responses to each other. Sometimes that equilibrium is less than good, and other times it is hard to find, but there is always one there.

But what about a classic game like Rock-Paper-Scissors? Or its modern descendants like Yomi, BattleCON, Grimslingers, or Lagoon? These games are defined by a non-transitive relation between strategies: Rock beats Scissors beats Paper beats Rock …

There are no pairs of matching best responses in Rock-Paper-Scissors: the best response to Rock is Paper, and the best response to Paper is Scissors, and so on. This means that there cannot be a Nash Equilibrium… in pure strategies.

What’s the best thing you can do in Rock-Paper-Scissors? Randomize your selection equally between all three options.

(As an aside, humans are terrible at truly randomizing, and this can be exploited by a robot that will beat you at RPS.)

There’s a concept in Game Theory that matches this: a **Mixed Strategy** specifies a probability of playing each of the various pure strategies that are available to a player (each receiving a probability between 0 and 1 inclusive, and the sum of all such probabilities totaling 1).

The best strategy that you can play in Rock-Paper-Scissors is an equal mix, playing each choice with a probability of 1/3. Playing this strategy means that you cannot be exploited: there is nothing that your opponent can do to increase their chances of winning. If your probability mix were a bit off, say 1/2 Rock, 1/4 Paper, 1/4 Scissors, then your opponent would be better off playing all Paper.
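The exploitation is easy to check directly. Here is a minimal Python sketch, using the standard win = +1, loss = -1, tie = 0 payoffs, that computes the expected payoff of each pure response against a given mix:

```python
# Expected payoff of each pure strategy against an opponent's mix,
# using the standard Rock-Paper-Scissors payoffs: win +1, loss -1, tie 0.
OPTIONS = ["Rock", "Paper", "Scissors"]
PAYOFF = [
    [0, -1, 1],   # Rock     vs R, P, S
    [1, 0, -1],   # Paper    vs R, P, S
    [-1, 1, 0],   # Scissors vs R, P, S
]

def expected_payoffs(mix):
    """Expected payoff of each pure response against the mix (pR, pP, pS)."""
    return [sum(PAYOFF[a][b] * mix[b] for b in range(3)) for a in range(3)]

# Against the equal mix, every response earns 0 -- nothing to exploit.
print(expected_payoffs([1/3, 1/3, 1/3]))
# Against the lopsided 1/2 Rock, 1/4 Paper, 1/4 Scissors mix,
# Paper earns +0.25 per game -- the mix is exploitable.
print(expected_payoffs([0.5, 0.25, 0.25]))
```

Against the equal mix all three responses earn exactly zero, which is precisely why it cannot be exploited.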

If you don’t want to get exploited, you play the equal mix; your opponent thinks the same, and they play the equal mix. The result is that the two strategies are best responses to each other, a **Mixed Strategy Nash Equilibrium**.

The Mixed Strategy Nash Equilibrium for a game need not be an equal mix: that’s just a feature of Rock-Paper-Scissors. In a 2p fighting game like BattleCON, for example, each move may deal a different amount of damage, have a different chance of success, or grant a different special ability. The Mixed Strategy Nash Equilibrium for such an exchange would most likely be an unequal distribution, with some moves even getting a probability of zero.

Paul Owen, in his discussion of Game Theory, gives an example from Tennis, where the server has to decide which section of the court to serve to, and the defender has to decide which section of the court to defend. This is similar to a number of simultaneous action selection showdowns like the recent robot battle game Critical Mass.

Paul says that his Tennis example has no equilibrium at all, which is true if you’re speaking only of pure strategy equilibrium. However, there is a Mixed Strategy Nash Equilibrium:

The server will serve to C 30% of the time and D 70% of the time, while the defender will defend C 40% of the time and D 60% of the time. The server can do no better than this, and ends up with a 62% chance of scoring a point.
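These mixes can be recovered with the standard indifference calculation: each player randomizes so that the other is indifferent between their options. The scoring probabilities below are hypothetical — chosen only to be consistent with the percentages quoted above, not taken from Paul Owen’s article — but the same formulas work for any 2x2 zero-sum game:

```python
# Mixed Strategy Nash Equilibrium of a 2x2 zero-sum game via indifference.
# Entry S[serve][defend] is the server's probability of scoring the point.
# These numbers are hypothetical, picked to reproduce the mixes quoted above.
S = [[0.2, 0.9],   # serve to C: defender at C, defender at D
     [0.8, 0.5]]   # serve to D: defender at C, defender at D

(a, b), (c, d) = S
den = a - b - c + d
p_serve_C  = (d - c) / den   # server's mix makes the defender indifferent
q_defend_C = (d - b) / den   # defender's mix makes the server indifferent
value = q_defend_C * a + (1 - q_defend_C) * b  # server's scoring chance

print(p_serve_C, q_defend_C, value)  # roughly 0.3, 0.4, 0.62
```

Either player deviating from their mix gains nothing, since the other’s randomization has made every option equally good.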

There are a number of ways to interpret what these sorts of mixed strategies mean, each of which could connect with other game mechanisms in interesting ways.

The probability mix could be intentional randomization on the part of the players; e.g. choosing strategy C with probability 30% and strategy D with probability 70%. If this is so, the game should include a randomization mechanism to ensure this is truly random (and not the exploitable whims of human psychology) - throw a D20 in the box if need be. Handing over the decision-making power to a randomizer can help prevent players from being exploited.
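A digital game (or a companion app) can do the same job as that D20. A minimal sketch using Python's standard library, with the 30/70 split from the tennis example as the illustration:

```python
import random

def play_mixed_strategy(options, weights, rng=random):
    """Delegate the choice to a randomizer so the mix can't be second-guessed."""
    return rng.choices(options, weights=weights, k=1)[0]

# e.g. a 30%/70% mix between serving to C and serving to D
serve = play_mixed_strategy(["C", "D"], [0.3, 0.7])
print(serve)  # "C" about 30% of the time, "D" about 70%
```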

These numbers could also be the relative frequencies with which each strategy has been played in the past; e.g. the player has played C 30% of the time and D 70% of the time. This track record could be used to inform what they are going to do in the future. To make this happen, you’ll need frequent, repeated, identical interactions, whether in the same game or across different games, and some way to track them. Adding this sort of tracker to a Legacy game could help players reach a Mixed Strategy Nash Equilibrium, for example.
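This "learn from the track record" idea is essentially what game theorists call **fictitious play**: each round, play a best response to the empirical frequencies observed so far. A minimal self-play sketch for Rock-Paper-Scissors (the convergence of the empirical mix is a known result for zero-sum games; the tie-breaking rule here is an arbitrary choice):

```python
# Fictitious play: repeatedly best-respond to the empirical mix seen so far.
# Payoffs are the usual RPS values: win +1, loss -1, tie 0 (R, P, S order).
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def best_response(counts):
    # Total payoff of each action against the observed play counts.
    scores = [sum(PAYOFF[a][b] * counts[b] for b in range(3)) for a in range(3)]
    return scores.index(max(scores))  # ties broken by lowest index

def fictitious_play(rounds):
    counts = [0, 0, 0]  # how often R, P, S have been played so far
    for _ in range(rounds):
        counts[best_response(counts)] += 1
    total = sum(counts)
    return [c / total for c in counts]

# The empirical frequencies drift toward the equal mix (1/3, 1/3, 1/3).
print(fictitious_play(30000))
```

No player here ever computes the equilibrium directly; it emerges from the track record alone.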

Similarly, this probability mix could represent the state of the **Metagame**, describing the mix of different strategies in a larger population of players (as is done in **Evolutionary Game Theory**). In a Magic: The Gathering tournament, for example, the field usually breaks down into a mix of a handful of different archetypes (notwithstanding those rogue designers trying to break the metagame). Knowing approximately what the metagame consists of will allow strategic players to make the optimal deck choice for an event and, over time, the environment will stabilize to a consistent metagame population (which is why it constantly needs to be refreshed with set rotation).
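As a toy illustration of that deck choice, picking the best archetype against a known field is just an expected-win-rate calculation. The archetypes and win rates below are invented for the example, not real tournament data:

```python
# Expected win rate of each archetype against a hypothetical metagame field.
ARCHETYPES = ["Aggro", "Control", "Combo"]
# WIN[i][j] = probability that archetype i beats archetype j (made-up numbers).
WIN = [
    [0.5, 0.4, 0.6],  # Aggro
    [0.6, 0.5, 0.4],  # Control
    [0.4, 0.6, 0.5],  # Combo
]

def expected_win_rates(field):
    """Each archetype's expected win rate against the field mix."""
    return [sum(WIN[i][j] * field[j] for j in range(3)) for i in range(3)]

field = [0.5, 0.3, 0.2]          # an Aggro-heavy metagame
rates = expected_win_rates(field)
best = ARCHETYPES[rates.index(max(rates))]
# With these made-up numbers, the Aggro-heavy field rewards playing Control.
```

Of course, once enough players make this calculation the field itself shifts, which is exactly the evolutionary dynamic described above.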

The takeaway from this is that every game has a Nash Equilibrium; it just may not be as obvious as a Pure Strategy Nash Equilibrium. It may emerge over time, slowly shaping player behaviour. Players don't need to be game theorists to figure out the Equilibrium; they will arrive at it naturally over the course of repeated plays. The intrinsic feedback mechanism of a game — winning or losing — will shape their strategies. A strategy that wins more often than others will soon see its counter emerge in the wild, and the pendulum between these (and any other strategies) will gradually settle into a stable Equilibrium.