Extensive form game:

The normal form gives the mathematician an easy notation for the study of equilibrium problems, because it bypasses the question of how strategies are calculated, i.e. how the game is actually played. The convenient notation for dealing with these questions, more relevant to combinatorial game theory, is the extensive form of the game. This is given by a tree, in which at each vertex the player whose turn it is chooses one of the outgoing edges.
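As a rough illustration (not part of the original text), an extensive-form game can be sketched as a tree data structure: each internal node records which player moves there, each edge is one of that player's choices, and leaves carry the payoffs. The class and example tree below are hypothetical.

```python
class Node:
    def __init__(self, player=None, payoffs=None, children=None):
        self.player = player            # player to move at this vertex (None at a leaf)
        self.payoffs = payoffs          # payoff tuple if this node is a leaf
        self.children = children or {}  # edge label -> child Node

# A tiny two-move tree: player 1 chooses L or R, then player 2 chooses l or r.
game = Node(player=1, children={
    "L": Node(player=2, children={"l": Node(payoffs=(3, 1)),
                                  "r": Node(payoffs=(0, 0))}),
    "R": Node(player=2, children={"l": Node(payoffs=(1, 3)),
                                  "r": Node(payoffs=(2, 2))}),
})
```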

Simple game:

The normal form and the extensive form capture the essence of non-cooperative games. But in some games the formation of coalitions and the way cooperation is developed are more important. For dealing with questions of cooperation, the notion of a simple game was developed.

Types of games and examples:

Game theory classifies games into many categories that determine which particular methods one can apply to solving them (and indeed how one defines "solved" for a particular category). Common categories include:

Zero-sum and non-zero-sum games:

In zero-sum games the total benefit to all players in the game, for every combination of strategies, always adds to zero (or, more informally put, a player benefits only at the expense of others). Go, chess and poker exemplify zero-sum games, because one wins exactly the amount one's opponents lose. Most real-world examples in business and politics, as well as the famous prisoner's dilemma, are non-zero-sum games, because some outcomes have net results greater or less than zero. Informally, a gain by one player does not necessarily correspond to a loss by another. For example, a business contract ideally involves a positive-sum outcome, where each side ends up better off than if they had not made the deal.

Note that zero-sum games are easier to analyse, and it turns out that one can transform any game into a zero-sum game by adding an additional dummy player (often called "the board"), whose losses compensate the players' net winnings.
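A minimal sketch of this dummy-player construction might look like the following; the function name and representation are illustrative, not a standard API.

```python
def add_board_player(payoffs):
    """Extend a payoff profile with a dummy "board" payoff so the total is zero."""
    return payoffs + (-sum(payoffs),)

print(add_board_player((5, 3)))   # (5, 3, -8): the three payoffs now sum to zero
```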

A payoff matrix is a convenient way to represent a game. Consider, for example, the two-player zero-sum game with the following matrix:

Player 1 \ Player 2    Action A    Action B    Action C
Action 1                  30          -10          20
Action 2                  10           20         -20

The scoring works as follows: in each round, each player's point total is affected by "the payoff", the number in one of the fields of the table above. A positive payoff adds to the first player's total and subtracts from the second player's; a negative payoff subtracts from the first player's total and adds to the second player's.

The order of play proceeds as follows: The first player chooses in secret one of the two actions 1 or 2; the second player, unaware of the first player's choice, chooses in secret one of the three actions A, B or C. Then, the choices are revealed and each player's points total is affected according to the payoff for those choices.

Example: the first player chooses action 2 and the second player chooses action B. When the payoff is allocated, the first player gains 20 points and the second player loses 20 points. Now, in this example game both players know the payoff matrix and attempt to maximize the number of their points. What should they do?
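As a small sketch of how this game could be represented and one round resolved in code (the dictionary layout and function names are this example's own, not from the text):

```python
# Payoff matrix of the example game: keys are (player 1's action, player 2's action).
PAYOFF = {
    (1, "A"): 30, (1, "B"): -10, (1, "C"): 20,
    (2, "A"): 10, (2, "B"): 20,  (2, "C"): -20,
}

def play_round(p1_action, p2_action):
    """Return the point changes (player 1, player 2) for one zero-sum round."""
    payoff = PAYOFF[(p1_action, p2_action)]
    return payoff, -payoff

# The worked example: player 1 chooses action 2, player 2 chooses action B.
print(play_round(2, "B"))   # (20, -20)
```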

Player 1 could reason as follows: "with action 2, I could lose up to 20 points and can win only 20, while with action 1 I can lose only 10 but can win up to 30, so action 1 looks a lot better." With similar reasoning, player 2 would choose action C. If both players take these actions, the first player will win 20 points. But what happens if player 2 anticipates the first player's reasoning and choice of action 1, and deviously goes for action B, so as to win 10 points? Or if the first player in turn anticipates this devious trick and goes for action 2, so as to win 20 points after all?
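The worst-case comparison each player is making can be read straight off the matrix; the snippet below (illustrative names only) computes each action's worst outcome for both players.

```python
MATRIX = {1: {"A": 30, "B": -10, "C": 20},
          2: {"A": 10, "B": 20,  "C": -20}}

# Player 1: the worst payoff that can result from each of their actions.
worst_for_player1 = {action: min(row.values()) for action, row in MATRIX.items()}

# Player 2: the largest amount they can be forced to concede with each column.
worst_for_player2 = {col: max(MATRIX[row][col] for row in MATRIX)
                     for col in ("A", "B", "C")}

print(worst_for_player1)   # {1: -10, 2: -20}
print(worst_for_player2)   # {'A': 30, 'B': 20, 'C': 20}
```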

John von Neumann had the fundamental and surprising insight that probability provides a way out of this conundrum. Instead of deciding on a definite action to take, the two players assign probabilities to their respective actions, and then use a random device which, according to these probabilities, chooses an action for them. Each player computes the probabilities so as to minimise the maximum expected point-loss independent of the opponent's strategy; this leads to a linear programming problem with a unique solution for each player. This minimax method can compute provably optimal strategies for all two-player zero-sum games.

For the example given above, it turns out that the first player should choose action 1 with probability 4/7 (about 57%) and action 2 with probability 3/7 (about 43%), while the second player should assign the probabilities 0, 4/7 and 3/7 to the three actions A, B and C. The first player will then win 20/7, or about 2.86 points, on average per game.
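As a sketch of the computation behind these numbers, the first player's linear program can be handed to an off-the-shelf LP solver; the formulation below assumes SciPy as a dependency and uses illustrative variable names. Player 1 maximizes the game value v subject to the expected payoff against every column being at least v.

```python
import numpy as np
from scipy.optimize import linprog

M = np.array([[30, -10, 20],
              [10,  20, -20]])            # rows: player 1's actions, columns: player 2's

# Decision variables: [x1, x2, v], where (x1, x2) is player 1's mixed strategy.
c = [0, 0, -1]                             # linprog minimizes, so minimize -v
A_ub = np.hstack([-M.T, np.ones((3, 1))])  # for each column j: v - sum_i x_i * M[i][j] <= 0
b_ub = np.zeros(3)
A_eq = [[1, 1, 0]]                         # probabilities sum to 1
b_eq = [1]
bounds = [(0, 1), (0, 1), (None, None)]    # v is unbounded in sign

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x1, x2, v = res.x
print(round(x1, 3), round(x2, 3), round(v, 3))   # ~0.571 0.429 2.857, i.e. 4/7, 3/7, 20/7
```

The second player's optimal mixture comes from the symmetric program (or, equivalently, from the dual of this one).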
