Abstract
This paper studies opinion dynamics for a set of heterogeneous populations of individuals pursuing two conflicting goals: to seek consensus and to remain coherent with their initial opinions. The multi-population game under investigation is characterized by (i) rational agents who behave strategically, (ii) heterogeneous populations, and (iii) opinions evolving in response to local interactions. The main contribution of this paper is to encompass all of these aspects under the unified framework of mean-field game theory. We show that, assuming initial Gaussian density functions and affine control policies, the Fokker–Planck–Kolmogorov equation preserves Gaussianity over time. This fact is then used to derive explicit expressions for the optimal control strategies when the players are myopic. We then explore consensus formation depending on the stubbornness of the involved populations: We identify conditions that lead to some elementary patterns, such as consensus, polarization, or plurality of opinions. Finally, in the baseline example of a stubborn population interacting with a most gregarious one, we study the behavior of the model with a finite number of players, describing the dynamics of the average opinion, which is now a stochastic process. We also provide numerical simulations to show how the parameters impact the equilibrium formation.
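The Gaussianity-preservation property stated in the abstract can be illustrated numerically. The sketch below is not the authors' code: it assumes an affine feedback \(u(x,t)=a(t)x+b(t)\) applied to the scalar diffusion \(\mathrm{d}x = u\,\mathrm{d}t + \sigma\,\mathrm{d}W\), under which an initially Gaussian density \(\mathcal{N}(\mu, s^2)\) stays Gaussian and its moments obey the ODEs \(\dot\mu = a\mu + b\) and \(\dot{s^2} = 2a s^2 + \sigma^2\). The coefficients `a`, `b`, `sigma` are illustrative constants, not values from the paper.

```python
import numpy as np

# Hedged sketch: with affine control u(x, t) = a*x + b on dx = u dt + sigma dW,
# a Gaussian density N(mu, s2) stays Gaussian; we integrate the moment ODEs
#   mu' = a*mu + b,   s2' = 2*a*s2 + sigma**2
# by forward Euler. Parameters are illustrative, not taken from the paper.
def gaussian_moments(mu0, s20, a=-1.0, b=0.5, sigma=0.2, T=5.0, n=5000):
    dt = T / n
    mu, s2 = mu0, s20
    for _ in range(n):
        mu += (a * mu + b) * dt
        s2 += (2.0 * a * s2 + sigma ** 2) * dt
    return mu, s2

mu, s2 = gaussian_moments(mu0=2.0, s20=1.0)
# For a = -1, b = 0.5 the moments relax toward the stationary values
# mu* = -b/a = 0.5 and s2* = sigma**2 / (-2*a) = 0.02.
print(mu, s2)
```

Tracking only \((\mu, s^2)\) instead of the full density is exactly what makes the Gaussian ansatz useful: the infinite-dimensional Fokker–Planck–Kolmogorov equation collapses to two scalar ODEs.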
Notes
The choice of a logarithmic cost is common when one wishes to describe so-called crowd-seeking behavior on the part of the players (see, e.g., [14]). In this context, the logarithmic function expresses the fact that the more players reach a consensus on state \(x_i(\cdot )\), the smaller the marginal utility of each new entrant player with the same state \(x_i(\cdot )\).
It is fairly accepted in the context of social interactions to assume that payoffs are linear in the average choice of the population (see, e.g., [19] for a reference contribution in the context of binary choice models).
Since N is fixed across simulations, in what follows we suppress the index N from the notation.
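The crowd-seeking reading of the logarithmic cost in the first note can be made explicit. If the congestion reward at state \(x\) is taken to be \(\log m(x,t)\), with \(m\) the population density (an illustrative form, consistent with [14]), then the marginal benefit of one additional co-located player is
\[
\frac{\partial}{\partial m}\,\log m(x,t) \;=\; \frac{1}{m(x,t)},
\]
which is decreasing in \(m\): the denser the consensus around \(x\), the smaller the gain brought by each new entrant with the same state.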
References
Castellano, C., Fortunato, S., Loreto, V.: Statistical physics of social dynamics. Rev. Mod. Phys. 81, 591–646 (2009)
Acemoğlu, D., Ozdaglar, A.: Opinion dynamics and learning in social networks. Int. Rev. Econ. 1(1), 3–49 (2011)
Aeyels, D., Smet, F.D.: A mathematical model for the dynamics of clustering. Phys. D Nonlinear Phenom. 237(19), 2517–2530 (2008)
Banerjee, A.V.: A simple model of herd behavior. Q. J. Econ. 107(3), 797–817 (1992)
Krause, U.: A discrete nonlinear and non-autonomous model of consensus formation. In: Elaydi, S., Ladas, G., Popenda, J. and Rakowski, J. (eds.) Communications in Difference Equations, pp. 227–236. Gordon and Breach Publ., Amsterdam, NL (2000)
Hegselmann, R., Krause, U.: Opinion dynamics and bounded confidence models, analysis, and simulations. J. Artif. Soc. Soc. Simul. 5(3), 2 (2002)
Pluchino, A., Latora, V., Rapisarda, A.: Compromise and synchronization in opinion dynamics. Eur. Phys. J. B Condens. Matter Complex Syst. 50(1–2), 169–176 (2006)
Acemoğlu, D., Como, G., Fagnani, F., Ozdaglar, A.: Opinion fluctuations and disagreement in social networks. Math. Oper. Res. 38(1), 1–27 (2013)
Como, G., Fagnani, F.: Scaling limits for continuous opinion dynamics systems. Ann. Appl. Probab. 21(4), 1537–1567 (2011)
Huang, M., Caines, P., Malhamé, R.: Individual and mass behaviour in large population stochastic wireless power control problems: centralized and Nash equilibrium solutions. In: Proceedings 42nd IEEE Conference on Decision and Control, Maui, HI, pp. 98–103 (2003)
Huang, M., Caines, P., Malhamé, R.: Large population stochastic dynamic games: closed-loop McKean–Vlasov systems and the Nash certainty equivalence principle. Commun. Inf. Syst. 6(3), 221–252 (2006)
Huang, M., Caines, P., Malhamé, R.: Large population cost-coupled LQG problems with non-uniform agents: individual-mass behaviour and decentralized \(\epsilon \)-Nash equilibria. IEEE Trans. Autom. Control 52(9), 1560–1571 (2007)
Lasry, J., Lions, P.: Mean field games. Jpn. J. Math. 2, 229–260 (2007)
Bardi, M.: Explicit solutions of some linear-quadratic mean field games. Netw. Heterog. Media 7, 243–261 (2012)
Gomes, D., Saúde, J.: Mean field games models—a brief survey. Dyn. Games Appl. 4(2), 110–154 (2014)
Adlakha, S., Johari, R.: Mean field equilibrium in dynamic games with strategic complementarities. Oper. Res. 61(4), 971–989 (2013)
Bauso, D., Pesenti, R.: Opinion dynamics, stubbornness and mean-field games. In: Proceedings of the 53rd IEEE Conference on Decision and Control, Los Angeles, CA (2014)
Tembine, H., Zhu, Q., Başar, T.: Risk-sensitive mean-field stochastic differential games. In: Proceedings of 2011 IFAC World Congress, Milan, Italy (2011)
Brock, W.A., Durlauf, S.: Discrete choice with social interactions. Rev. Econ. Stud. 68(2), 235–260 (2001)
Olfati-Saber, R., Fax, J.A., Murray, R.M.: Consensus and cooperation in networked multi-agent systems. Proc. IEEE 95(1), 215–233 (2007)
Priuli, F.S.: Linear-quadratic \(N\)-person and mean-field games: infinite horizon with discounted cost and singular limits. Technical report. arXiv:1403.4090v1 (2014)
Loparo, K., Feng, X.: Stability of stochastic systems. In: Levine, W. S. (ed.) The Control Handbook, pp. 1105–1126. CRC Press, Boca Raton, FL (1996)
Arnold, L.: Stochastic Differential Equations: Theory and Applications. A Wiley Interscience, Wiley, New York (1974)
Guéant, O., Lasry, J., Lions, P.: Mean Field Games and Applications. In: Carmona R. et al. (eds.), Paris-Princeton Lectures on Mathematical Finance 2010, Lecture Notes in Mathematics 2003, pp. 205–266. Springer-Verlag, Berlin Heidelberg (2011)
Acknowledgments
This work was supported by the 2012 “Research Fellow” Program of the Dipartimento di Matematica, Università di Trento and by PRIN 20103S5RN3 “Robust decision making in markets and organizations, 2013–2016.”
Communicated by Negash G. Medhin.
Appendix
The optimization problem introduced in Sect. 2 can be turned into a multi-population mean-field game. Preliminary to the derivation of a mean-field game is the definition of a value function, as is commonly done in differential game theory and optimal control. The value function is the value of the optimization problem carried out by each single player k, starting at time t from state \(x^k\) and for given densities m(t). As we will show, the value function depends only on the population's characteristics (apart from the initial state \(x^k(t)\)).
Proposition 6.1
Consider a generic population i and any agent k such that \(i(k)=i\). Define the value function for agent k as
Then, the mean-field system is described by the equations
for some initial population state distribution \(m_{i}(0)\) for all \(i\in I\). Furthermore, the optimal control is of the form
Proof
From dynamic programming, the value function can be obtained from a corresponding maximized Hamiltonian function \(H^k\) involving an adjoint variable \(p_i\), called the ith co-state, and given by
From [13], the mean-field system associated with the mean-field game introduced in Sect. 2 is given by
where \(m_i(x,0)=m_{0i}(x)\) for all \(i \in I\) are the initial distributions and where \(x=x^k(t)\).
We first prove condition (37). To this end, let us write the Hamiltonian as:
By differentiating with respect to \(u_i\), we obtain
which yields (37). Note that convexity of the cost functional guarantees sufficiency of the above first-order condition.
We now prove (36). Concerning the first equation, which is a PDE corresponding to the Hamilton–Jacobi–Bellman equation, let us replace \(u_i\) in the Hamiltonian (39) by its expression (37), i.e.,
Using the above expression of the Hamiltonian in the first equation in (38), we obtain the Hamilton–Jacobi–Bellman equation in (36).
To prove the second equation, which is a PDE representing the Fokker–Planck–Kolmogorov equation, we simply substitute (37) into the second equation in (38), and this concludes the proof. \(\square \)
The significance of the above result is that, to find the optimal controls, we need to solve the set of coupled PDEs defined in (36) with given boundary conditions. This can be done by iteratively solving the Hamilton–Jacobi–Bellman equation for fixed \(m_i\) and entering the optimal \(u_i\) obtained from (37) into the Fokker–Planck–Kolmogorov equation, until a fixed point in \(v_i\) and \(m_i\) is reached [24]. For this iteration to converge, it must be proved that such a map is a contraction; to do this, we rely on compactness of the map itself and on the Schauder fixed-point theorem [15]. Note that, in Proposition 6.1, we do not consider a stationary control or a stationary population density distribution, although we deal with a discounted objective function over an infinite horizon. Indeed, we are interested in determining the evolution of the population density distribution over time under the general hypothesis that, at time 0, the population is not distributed according to the long-term equilibrium density distribution.
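The iterative scheme just described — best response for fixed mean field, then propagation of the densities, repeated until self-consistency — can be sketched in a drastically simplified form. The toy below is not the paper's HJB/FPK solver: it tracks only the mean opinion of each population (the Gaussian-moment reduction discussed earlier), uses an illustrative myopic best-response drift that blends attraction to the average opinion with a stubborn pull toward the initial opinion (weight `theta`), and performs a Picard iteration on the average-opinion trajectory. All gains and parameters are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of the fixed-point scheme: guess the mean-field trajectory
# mbar(t), compute each population's best-response drift given mbar, propagate
# the population means, and repeat until mbar stops changing. theta_i in [0, 1]
# is a stubbornness weight (illustrative, not the paper's parametrization).
def mean_field_fixed_point(x0, theta, T=10.0, n=1000, tol=1e-8, max_iter=500):
    x0 = np.asarray(x0, float)        # initial mean opinions, one per population
    theta = np.asarray(theta, float)  # stubbornness weights
    dt = T / n
    mbar = np.full(n + 1, x0.mean())  # initial guess for the average opinion path
    for _ in range(max_iter):
        mu = np.tile(x0, (n + 1, 1))  # population means along the horizon
        for t in range(n):
            # drift: gregarious pull toward mbar, stubborn pull toward x0
            drift = (1 - theta) * (mbar[t] - mu[t]) + theta * (x0 - mu[t])
            mu[t + 1] = mu[t] + drift * dt
        new_mbar = mu.mean(axis=1)    # updated average-opinion trajectory
        if np.max(np.abs(new_mbar - mbar)) < tol:
            break                     # fixed point reached: mbar self-consistent
        mbar = new_mbar
    return mbar, mu

# Baseline example: one stubborn population (theta = 0.9, initial opinion -1)
# and one most gregarious population (theta = 0.1, initial opinion +1).
mbar, mu = mean_field_fixed_point(x0=[-1.0, 1.0], theta=[0.9, 0.1])
```

In this toy run the gregarious population is dragged toward the stubborn one, so the limiting average opinion sits close to the stubborn population's initial opinion — the qualitative pattern the paper studies in its stubborn-vs-gregarious baseline.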
A solution of (36) is called a mean-field Nash equilibrium, as it involves a set \(\{(m^*_i,u^*_i): i \in I\}\) of functions defined for all times \(t\ge 0\) such that
In other words, no player of population i benefits from changing its control policy \(u_i^*\) if the control policies, and therefore also the distributions, of the other populations are fixed to \(u_j^*\) and \(m_j^*\), respectively, for all \(j \in I \setminus \{i\}\). As a consequence, the trajectory of the distribution \(m_i^*\) over time is also unchanged.
Bauso, D., Pesenti, R. & Tolotti, M. Opinion Dynamics and Stubbornness Via Multi-Population Mean-Field Games. J Optim Theory Appl 170, 266–293 (2016). https://doi.org/10.1007/s10957-016-0874-5