From time-average replicator to best-response dynamics, and back
When a game is played over time, the players can change their actions throughout the game. The choice of action can follow different learning mechanisms, leading to different types of dynamics. In this talk I shall look at the relation between the time-average of replicator dynamics (RD) and best-response dynamics (BRD), the continuous-time counterpart of fictitious play (FP). It is known that the time-average of RD converges to an invariant set under BRD, but it is not known whether, given an invariant set of BRD, there always exists a corresponding RD orbit whose time-average converges to it.
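For reference, the standard single-population forms of the two dynamics discussed above can be sketched as follows (here $A$ denotes the payoff matrix and $x$ a mixed strategy on the simplex; the notation is illustrative, not taken from the talk):

```latex
% Replicator dynamics (RD): strategies grow in proportion
% to their payoff advantage over the population average.
\dot{x}_i \;=\; x_i\left[(Ax)_i - x^{\top} A x\right]

% Best-response dynamics (BRD): the state moves toward the
% (set-valued) best response to the current state.
\dot{x} \;\in\; \mathrm{BR}(x) - x,
\qquad \mathrm{BR}(x) = \operatorname*{arg\,max}_{y \in \Delta} \, y^{\top} A x

% Time-average of an RD orbit, the object whose limit
% behaviour is compared with BRD:
\bar{x}(t) \;=\; \frac{1}{t}\int_0^t x(s)\,\mathrm{d}s
```

The comparison in the talk concerns the limit sets of $\bar{x}(t)$ and the invariant sets of the BRD differential inclusion.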