# Quantal response equilibrium

Quantal response equilibrium: a solution concept in game theory

- Superset of: Nash equilibrium, Logit equilibrium
- Proposed by: Richard McKelvey and Thomas Palfrey
- Used for: Non-cooperative games
- Example: Traveler's dilemma

Quantal response equilibrium (QRE) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey,[1][2] it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.

In a quantal response equilibrium, players are assumed to make errors in choosing which pure strategy to play. The probability of any particular strategy being chosen is positively related to the payoff from that strategy. In other words, very costly errors are unlikely.

The equilibrium arises from the consistency of beliefs: a player's expected payoffs are computed from beliefs about the other players' probability distributions over strategies, and in equilibrium those beliefs are correct.

## Application to data

When analyzing data from the play of actual games, particularly laboratory experiments such as those involving the matching pennies game, Nash equilibrium can be unforgiving. Any non-equilibrium move can appear equally "wrong", yet realistically such moves should not be grounds for rejecting a theory. QRE allows every strategy to be played with non-zero probability, and so any data is possible (though not necessarily reasonable).
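Because every strategy receives positive probability, QRE yields a well-defined likelihood for any observed data, so the rationality parameter λ can be estimated by maximum likelihood. The sketch below is illustrative, not from the original text: it assumes hypothetical data and a symmetric prisoner's dilemma, using the logit specification described in the next section.

```python
import math

# Sketch: estimating the logit parameter lambda by maximum likelihood.
# Assumed game: a symmetric prisoner's dilemma with payoffs 3 (mutual C),
# 5 (defect against C), 0 (cooperate against D), 1 (mutual D). The
# symmetric logit-QRE cooperation probability then solves
#     p = 1 / (1 + exp(lam * (1 + p))).

def qre_coop_prob(lam, iters=200):
    p = 0.5
    for _ in range(iters):                 # fixed-point iteration
        p = 1.0 / (1.0 + math.exp(lam * (1.0 + p)))
    return p

def log_likelihood(lam, coop, total):
    p = qre_coop_prob(lam)
    return coop * math.log(p) + (total - coop) * math.log(1.0 - p)

# Hypothetical data: 30 cooperative choices out of 200 observations.
coop, total = 30, 200
grid = [i / 100 for i in range(1, 301)]    # lambda values 0.01 .. 3.00
lam_hat = max(grid, key=lambda l: log_likelihood(l, coop, total))
```

The maximizing λ̂ is the value at which the equilibrium cooperation rate matches the observed frequency (here 0.15); because the logit-QRE probability never reaches 0 or 1, the likelihood is well defined for any data set.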

## Logit equilibrium

The most common specification for QRE is logit equilibrium (LQRE). In a logit equilibrium, players' strategies are chosen according to the probability distribution:

${\displaystyle P_{ij}={\frac {\exp(\lambda EU_{ij}(P_{-i}))}{\sum _{k}{\exp(\lambda EU_{ik}(P_{-i}))}}}}$

${\displaystyle P_{ij}}$ is the probability of player ${\displaystyle i}$ choosing strategy ${\displaystyle j}$. ${\displaystyle EU_{ij}(P_{-i})}$ is the expected utility to player ${\displaystyle i}$ of choosing strategy ${\displaystyle j}$ under the belief that the other players are playing according to the probability distribution ${\displaystyle P_{-i}}$. Note that the "belief" density in the expected payoff on the right side must match the choice density on the left side. Thus computing expectations of observable quantities such as payoff, demand, output, etc., requires finding fixed points as in mean field theory.[3]
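Such a fixed point can be found by simple iteration of the choice-probability map above. The following sketch uses an illustrative game (a prisoner's dilemma) and hypothetical function names; in the LQRE, the dominated strategy (cooperate) is played with small but strictly positive probability, unlike in the unique Nash equilibrium.

```python
import numpy as np

# A minimal sketch of computing a logit equilibrium (LQRE) by iterating
# the logit choice probabilities until they reach a fixed point.
# The game and payoffs are illustrative assumptions, not from the text.

def logit_qre(A, B, lam=1.0, iters=5000, tol=1e-12):
    """Return mixed strategies (p, q) for the row and column players."""
    p = np.ones(A.shape[0]) / A.shape[0]    # row player's strategy
    q = np.ones(A.shape[1]) / A.shape[1]    # column player's strategy
    for _ in range(iters):
        eu_row = A @ q                      # expected utilities of row strategies
        eu_col = B.T @ p                    # expected utilities of column strategies
        new_p = np.exp(lam * eu_row); new_p /= new_p.sum()
        new_q = np.exp(lam * eu_col); new_q /= new_q.sum()
        if np.abs(new_p - p).max() < tol and np.abs(new_q - q).max() < tol:
            return new_p, new_q
        p, q = new_p, new_q
    return p, q

# Prisoner's dilemma (rows/columns ordered: cooperate, defect).
A = np.array([[3.0, 0.0], [5.0, 1.0]])      # row player's payoffs
B = A.T                                      # symmetric game
p, q = logit_qre(A, B, lam=1.0)              # p[0] = prob. of cooperating
```

As λ → 0 the choice probabilities approach uniform play, and as λ → ∞ they concentrate on best responses, approaching Nash equilibrium.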

## For dynamic games

For dynamic (extensive form) games, McKelvey and Palfrey defined agent quantal response equilibrium (AQRE). AQRE is somewhat analogous to subgame perfection. In an AQRE, each player plays with some error as in QRE. At a given decision node, the player determines the expected payoff of each action by treating their future self as an independent player with a known probability distribution over actions. As in QRE, in an AQRE every strategy is used with nonzero probability.
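The backward-induction logic described above can be sketched for a toy two-stage game; the game tree, payoffs, and function names below are illustrative assumptions, not from the original text.

```python
import numpy as np

# AQRE sketch for a toy two-stage game, solved backwards with a logit
# response at each decision node. Assumed game tree: player 1 chooses
# Out (payoffs 2, 2) or In; after In, player 2 chooses L (payoffs 3, 1)
# or R (payoffs 0, 0).

def logit(utilities, lam):
    z = np.exp(lam * np.asarray(utilities, dtype=float))
    return z / z.sum()

lam = 2.0

# Player 2's node: logit choice between L (utility 1) and R (utility 0).
p2 = logit([1.0, 0.0], lam)                  # probabilities of (L, R)

# Player 1's node: In is evaluated at its expected payoff under p2,
# treating player 2 as an independent agent with known choice probabilities.
u_in = p2[0] * 3.0 + p2[1] * 0.0
p1 = logit([2.0, u_in], lam)                 # probabilities of (Out, In)
# Every action at every node receives strictly positive probability.
```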

## Applications

The quantal response equilibrium approach has been applied in various settings. For example, Goeree et al. (2002) study overbidding in private-value auctions,[4] Yi (2005) explores behavior in ultimatum games,[5] Hoppe and Schmitz (2013) study the role of social preferences in principal-agent problems,[6] and Kawagoe et al. (2018) investigate step-level public goods games with binary decisions.[7]

## Critiques

### Non-falsifiability

Work by Haile et al. has shown that QRE is not falsifiable in any normal form game, even with significant a priori restrictions on payoff perturbations.[8] The authors argue that the LQRE concept can sometimes restrict the set of possible outcomes from a game, but may be insufficient to provide a powerful test of behavior without a priori restrictions on payoff perturbations.

However, the authors say "this should not be mistaken for a critique of the QRE notion itself. Rather, our aim has been to clarify some limitations of examining behavior one game at a time and to develop approaches for more informative evaluation of QRE." This "non-falsifiability" results from showing that multiple probability distributions over player strategies may be consistent with the expected values from QRE, and that further conditions, such as requiring independent and identically distributed perturbations, are needed to guarantee a unique probability distribution for individual behavior, such as a logit distribution. This is essentially the same as the refinement problem that arises when multiple Nash equilibria occur.

### Loss of information

As in statistical mechanics, the mean-field approach, specifically taking the expectation inside the exponent, results in a loss of information.[9] More generally, information about how an agent's payoff varies with their strategy variable is lost.