Tournament One
Date | 2017-09-08
---|---
Location | Erasmus University Rotterdam
Participants | 5
Game | Game One
Our first tournament took place at Erasmus University Rotterdam. The tournament was small, with only 5 participants, but no less enjoyable for that. Vladimir Karamychev won the tournament and was awarded a token gift. The complete ranking was as follows:
Rank | Participant | University | Final Score |
---|---|---|---|
1 | Vladimir Karamychev | Erasmus University Rotterdam | 140.6 |
2 | Clemens Fiedler | Tilburg University | 122.3 |
3 | Gijsbert Zwart | University of Groningen | 120.3 |
4 | Bart Voogt | CPB Netherlands Bureau for Economic Policy Analysis | 116.2 |
5 | Andrei Dubovik | CPB Netherlands Bureau for Economic Policy Analysis | 108.7 |
All the code is publicly available and can be accessed via our git server; see Technical Organization for instructions. (If you run the code yourself, you should obtain exactly the same results on Linux; on Windows the results differ slightly, an issue we will address in the future.)
What follows are some descriptive statistics and considerations.
Evolutionary Dynamics
How well did the players perform against one another? Let $A$ denote the payoff matrix, where $A_{ij}$ gives the payoff of player $i$ when he is playing against player $j$. The payoff matrix for this tournament was as follows (green marks a win, salmon marks a loss):
 | VK | CF | GZ | BV | AD | Total
---|---|---|---|---|---|---
VK | | 32.2 | 56.1 | 27.2 | 25.1 | 140.6
CF | 27.1 | | 46.6 | 23.7 | 25.0 | 122.3
GZ | 32.2 | 33.6 | | 27.2 | 27.3 | 120.3
BV | 32.0 | 22.4 | 36.6 | | 25.2 | 116.2
AD | 23.4 | 23.1 | 37.6 | 24.6 | | 108.7
The first observation is that there was little collusion (cooperation) in the tournament. If two players had colluded perfectly, their joint profits would have been 250 after 1000 rounds, whereas the maximum joint profits, obtained between Gijsbert and Vladimir, were only 88.3.
Further, no single strategy was strictly dominant. On the other hand, Gijsbert's strategy was strictly dominated, although it performed well overall. If firms commit to these strategies, will all types of firms survive? Suppose there are many firms and initially each strategy is adopted by one-fifth of all firms. We can study the evolutionary dynamics of this system using the standard replicator equation, $\dot{x}_i = x_i \left( (Ax)_i - x^\top A x \right)$, where $x_i$ is the population share of strategy $i$.
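As a rough sketch, these dynamics can be simulated directly from the payoff matrix above. Note that self-play payoffs were not observed in the round-robin, so the diagonal entries below are a placeholder assumption (each player's average payoff); the actual simulation may have handled them differently.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Off-diagonal payoffs from the tournament table (per 1000 rounds).
# Diagonal (self-play) payoffs were not part of the round-robin; as a
# placeholder assumption we use each player's average payoff.
A = np.array([
    [np.nan, 32.2, 56.1, 27.2, 25.1],
    [27.1, np.nan, 46.6, 23.7, 25.0],
    [32.2, 33.6, np.nan, 27.2, 27.3],
    [32.0, 22.4, 36.6, np.nan, 25.2],
    [23.4, 23.1, 37.6, 24.6, np.nan],
])
for i in range(5):
    A[i, i] = np.nanmean(A[i])  # placeholder self-play payoff

def replicator(t, x):
    # Replicator equation: x_i' = x_i * ((A x)_i - x' A x)
    f = A @ x                  # expected payoff of each strategy
    return x * (f - x @ f)     # growth proportional to relative fitness

x0 = np.full(5, 0.2)           # each strategy starts with a 1/5 share
sol = solve_ivp(replicator, (0, 50), x0)
print(sol.y[:, -1])            # long-run population shares
```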
As can be seen from the figure, all strategies survive in the long run. Note that in equilibrium the expected payoffs from each strategy are equal. This evolutionary-games perspective gives a simple reason why a Nash equilibrium might be a poor predictor for real-life data: many different strategies, personified for example by marketing managers or data scientists, can survive in equilibrium.
Testing for Nash Equilibrium
The following figure compares the empirical distribution of all the prices observed in the tournament with the theoretical Nash distribution.
The Kolmogorov-Smirnov test rejects the null hypothesis that these distributions are the same at the 1% significance level (the p-value is indistinguishable from 0). It should be noted, however, that the tournament differs from the Levitan and Shubik model on which it is based. Firstly, there are multiple periods in the tournament. Secondly, the objective is to win as opposed to maximizing profits. (It seems to be an open question how much of a difference the objective function makes, given that the round-robin style of the tournament still seems to favour profit maximization. Indeed, Gijsbert lost every individual game but came in third overall.)
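For reference, a minimal sketch of such a test. Here `nash_cdf` is a hypothetical placeholder (a uniform stand-in; the actual equilibrium distribution would be derived from the Levitan-Shubik model's parameters), and the prices are simulated stand-in data rather than the tournament's.

```python
import numpy as np
from scipy import stats

# Placeholder for the theoretical Nash CDF of prices; the real one would
# come from the equilibrium of the Levitan-Shubik game.
def nash_cdf(p, p_low=0.5, p_high=1.0):
    p = np.asarray(p, dtype=float)
    return np.clip((p - p_low) / (p_high - p_low), 0.0, 1.0)

# Stand-in for the empirical prices observed in the tournament.
prices = np.random.default_rng(0).uniform(0.4, 1.0, size=5000)

stat, pvalue = stats.kstest(prices, nash_cdf)
print(f"KS statistic = {stat:.4f}, p-value = {pvalue:.2e}")
```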
Best Static Response
The analysis of the previous two sections could be performed for any lab experiment. However, in our tournament we collected not only the actions but also the strategies. Consequently, additional types of analysis become possible. This section gives one such example.
Andrei and Bart played static strategies: their prices were drawn from fixed distributions irrespective of the past actions of their opponents. The rest of the participants played history-dependent strategies. For example, Vladimir's strategy attempted to learn the strategy of the other player as well as possible (using conditional kernel densities) and then played the best response. Clearly, Andrei and Bart lost, but just how far can one get in this tournament with a static strategy? To study this question, let us replace Andrei's strategy with an arbitrary Beta distribution and then optimize over the parameters of that distribution so as to climb as high in the tournament's ranking as possible. (The Beta distribution has been chosen because it is relatively flexible.) The following figure compares the old and the new rankings.
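A minimal sketch of this hindsight optimization is given below. The function `play_tournament` is a hypothetical placeholder for re-running the round-robin with the static strategy substituted in; the real version would replay the archived strategies of the other participants, and the objective here is a stand-in.

```python
import numpy as np
from scipy import optimize, stats

# Placeholder: re-run the tournament with a static strategy whose prices
# are drawn by `draw_price`, and return the resulting score. The stand-in
# objective below merely penalizes distance from an arbitrary price.
def play_tournament(draw_price):
    rng = np.random.default_rng(0)
    prices = draw_price(rng, 1000)        # 1000 rounds of static draws
    return -np.mean((prices - 0.6) ** 2)  # stand-in objective

def negative_score(params):
    a, b = np.exp(params)                 # keep Beta parameters positive
    draw = lambda rng, n: stats.beta.rvs(a, b, size=n, random_state=rng)
    return -play_tournament(draw)

# Nelder-Mead handles the noisy, derivative-free objective.
res = optimize.minimize(negative_score, x0=[0.0, 0.0], method="Nelder-Mead")
a, b = np.exp(res.x)
print(f"best static strategy: Beta({a:.2f}, {b:.2f})")
```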
So, with the benefit of hindsight, a static distribution can perform well, but still not well enough to win the tournament. This result is somewhat expected: the tournament has 1000 periods, which is long enough to learn the distribution of the opponent and respond optimally. Therefore Vladimir's strategy, which does precisely that, does well. We have discussed the issue that a long tournament favours fast-learning strategies over strategies with good priors. In the future, we might hold tournaments with different prizes for different lengths, or make the length of the tournament uncertain, with a high chance of it being short.