In this lecture, I will present a general theory for mean-field games formulated in discrete time with discounted infinite-horizon cost. I will cover both perfect local state and decentralized imperfect state information structures. The state space of each player is a locally compact Polish space, and at each time the players are coupled through the empirical distribution of their states, which affects both the players’ individual costs and their state transition probabilities. I will first discuss the difficulties encountered in any attempt to obtain the exact Nash equilibrium (even if one exists) in such dynamic games with decentralized information and a finite number of players. The mean-field approach offers a way out of this difficulty, and it is the topic of this lecture.
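To make the coupling concrete, one can sketch the finite-population setup in the following (hypothetical) notation, which is mine and not necessarily that of the lecture:

```latex
% Illustrative notation for the N-player coupling (assumed, not from the lecture).
% Player i has state x^i_t in a locally compact Polish space X and action u^i_t;
% interaction is only through the empirical distribution of states:
\[
  e^N_t \;=\; \frac{1}{N}\sum_{j=1}^{N} \delta_{x^j_t},
\]
% which enters both the transition kernel and the discounted cost:
\[
  x^i_{t+1} \sim p\big(\,\cdot\,\big|\, x^i_t,\, u^i_t,\, e^N_t\big),
  \qquad
  J^N_i \;=\; \mathbb{E}\Big[\sum_{t=0}^{\infty} \beta^t\, c\big(x^i_t,\, u^i_t,\, e^N_t\big)\Big],
\]
% with a discount factor \beta \in (0,1).
```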
Focusing first on perfect local state information, and using the solution concept of Markov-Nash equilibrium (under which a policy is player-by-player optimal within the class of all Markov policies), I will show, under mild and precisely stated conditions, the existence of a mean-field equilibrium in the infinite-population limit. I will then show that the policy obtained from the mean-field equilibrium is approximately Markov-Nash when the number of players is sufficiently large. Following this, I will turn to the class of discrete-time partially observed mean-field games. Using the technique of converting the original partially observed stochastic control problem (which arises in the infinite-population limit when a generic player faces a cloud of players) into a fully observed one on the belief space, together with the dynamic programming principle, I will establish the existence of Nash equilibria under quite mild technical conditions. I will again show, as in the perfect local state information case, that the mean-field equilibrium policy, when adopted by every player, forms an approximate Nash equilibrium for games with sufficiently many players.
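The mean-field equilibrium and the belief-space conversion mentioned above can be sketched as follows, again in hypothetical notation of my own:

```latex
% A mean-field equilibrium is a mutually consistent pair (pi*, mu*) of a policy
% and a state-measure flow:
\[
  \pi^* \in \arg\min_{\pi}\, J_{\mu^*}(\pi)
  \quad \text{(optimality of } \pi^* \text{ against the flow } \mu^*\text{)},
\]
\[
  \mu^*_t \;=\; \mathrm{Law}(x_t) \ \text{under } \pi^* \text{ and } \mu^*
  \quad \text{(consistency of the flow)}.
\]
% In the partially observed case, the generic player's problem is first lifted
% to the belief space: given observations y_t, the belief
\[
  z_t(\,\cdot\,) \;=\; \mathbb{P}\big(x_t \in \cdot \,\big|\, y_0,\dots,y_t,\; u_0,\dots,u_{t-1}\big)
\]
% evolves as a fully observed Markov process on the space of probability
% measures P(X), to which dynamic programming can then be applied.
```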
In the last part of the lecture, I will briefly discuss extensions to risk-sensitive mean-field games, with perfect local state as well as partially observed information structures, using a transformation that converts the game in each case to a risk-neutral one. I will also offer some thoughts on extensions to mean-field games with hierarchies.
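The risk-sensitive criterion referred to here is typically of exponential form; one standard way such a transformation to a risk-neutral problem proceeds is sketched below (notation mine, as an illustration only):

```latex
% Risk-sensitive discounted cost with risk parameter lambda > 0:
\[
  J^{\mathrm{rs}}_{\mu}(\pi) \;=\; \frac{1}{\lambda}\,
  \log \mathbb{E}\Big[\exp\Big(\lambda \sum_{t=0}^{\infty} \beta^t\, c(x_t, u_t, \mu_t)\Big)\Big].
\]
% Since the logarithm is monotone, minimizing J^rs is equivalent to minimizing
% the multiplicative cost
\[
  \mathbb{E}\Big[\prod_{t=0}^{\infty} \exp\big(\lambda\, \beta^t\, c(x_t, u_t, \mu_t)\big)\Big],
\]
% which can be treated by risk-neutral (additive-cost) dynamic-programming
% techniques after a suitable reformulation.
```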
(Based on joint work with Naci Saldi and Maxim Raginsky)