
Opinion Dynamics and Stubbornness Via Multi-Population Mean-Field Games


Abstract

This paper studies opinion dynamics for a set of heterogeneous populations of individuals pursuing two conflicting goals: to seek consensus and to remain coherent with their initial opinions. The multi-population game under investigation is characterized by (i) rational agents who behave strategically, (ii) heterogeneous populations, and (iii) opinions evolving in response to local interactions. The main contribution of this paper is to encompass all of these aspects within the unified framework of mean-field game theory. We show that, assuming initial Gaussian density functions and affine control policies, the Fokker–Planck–Kolmogorov equation preserves Gaussianity over time. This fact is then used to derive explicit expressions for the optimal control strategies when the players are myopic. We then explore consensus formation depending on the stubbornness of the involved populations: we identify conditions that lead to elementary patterns such as consensus, polarization, or plurality of opinions. Finally, in the baseline case of a stubborn population interacting with a most gregarious one, we study the behavior of the model with a finite number of players, describing the dynamics of the average opinion, which is now a stochastic process. We also provide numerical simulations to show how the parameters affect equilibrium formation.


Notes

  1. A logarithmic cost is commonly used when one wishes to describe so-called crowd-seeking behavior on the part of the players (see, e.g., [14]). In this context, the logarithmic function expresses the fact that the more players reach a consensus on state \(x_i(\cdot )\), the smaller the marginal utility of each new entrant with the same state \(x_i(\cdot )\) (see the short calculation after these notes).

  2. It is widely accepted in the context of social interactions to assume that payoffs are linear in the average choice of the population (see, e.g., [19] for a reference contribution on binary choice models).

  3. Since N is fixed across simulations, in what follows we suppress the index N from the notation.
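
As a short illustration of Note 1 (a one-line calculation based on the crowd-seeking term \(\nu _{ij} \ln (m_j(x,t))\) that appears in the cost, cf. (39)), the marginal utility of additional mass at the same opinion is

$$\begin{aligned} \frac{\partial }{\partial m_j}\Big [ \nu _{ij} \ln (m_j(x,t)) \Big ] = \frac{\nu _{ij}}{m_j(x,t)}, \end{aligned}$$

which is decreasing in \(m_j(x,t)\): the larger the mass of players already holding opinion \(x\), the smaller the benefit brought by one more player joining them.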

References

  1. Castellano, C., Fortunato, S., Loreto, V.: Statistical physics of social dynamics. Rev. Mod. Phys. 81, 591–646 (2009)


  2. Acemoğlu, D., Ozdaglar, A.: Opinion dynamics and learning in social networks. Int. Rev. Econ. 1(1), 3–49 (2011)


  3. Aeyels, D., Smet, F.D.: A mathematical model for the dynamics of clustering. Phys. D Nonlinear Phenom. 237(19), 2517–2530 (2008)


  4. Banerjee, A.V.: A simple model of herd behavior. Q. J. Econ. 107(3), 797–817 (1992)


  5. Krause, U.: A discrete nonlinear and non-autonomous model of consensus formation. In: Elaydi, S., Ladas, G., Popenda, J., Rakowski, J. (eds.) Communications in Difference Equations, pp. 227–236. Gordon and Breach, Amsterdam (2000)

  6. Hegselmann, R., Krause, U.: Opinion dynamics and bounded confidence models, analysis, and simulations. J. Artif. Soc. Soc. Simul. 5(3), 2 (2002)


  7. Pluchino, A., Latora, V., Rapisarda, A.: Compromise and synchronization in opinion dynamics. Eur. Phys. J. B Condens. Matter Complex Syst. 50(1–2), 169–176 (2006)


  8. Acemoğlu, D., Como, G., Fagnani, F., Ozdaglar, A.: Opinion fluctuations and disagreement in social networks. Math. Oper. Res. 38(1), 1–27 (2013)


  9. Como, G., Fagnani, F.: Scaling limits for continuous opinion dynamics systems. Ann. Appl. Probab. 21(4), 1537–1567 (2011)


  10. Huang, M., Caines, P., Malhamé, R.: Individual and mass behaviour in large population stochastic wireless power control problems: centralized and Nash equilibrium solutions. In: Proceedings of the 42nd IEEE Conference on Decision and Control, Maui, HI, pp. 98–103 (2003)

  11. Huang, M., Caines, P., Malhamé, R.: Large population stochastic dynamic games: closed-loop McKean–Vlasov systems and the Nash certainty equivalence principle. Commun. Inf. Syst. 6(3), 221–252 (2006)


  12. Huang, M., Caines, P., Malhamé, R.: Large population cost-coupled LQG problems with non-uniform agents: individual-mass behaviour and decentralized \(\epsilon \)-Nash equilibria. IEEE Trans. Autom. Control 52(9), 1560–1571 (2007)


  13. Lasry, J., Lions, P.: Mean field games. Jpn. J. Math. 2, 229–260 (2007)


  14. Bardi, M.: Explicit solutions of some linear-quadratic mean field games. Netw. Heterog. Media 7, 243–261 (2012)


  15. Gomes, D., Saúde, J.: Mean field games models—a brief survey. Dyn. Games Appl. 4(2), 110–154 (2014)


  16. Adlakha, S., Johari, R.: Mean field equilibrium in dynamic games with strategic complementarities. Oper. Res. 61(4), 971–989 (2013)


  17. Bauso, D., Pesenti, R.: Opinion dynamics, stubbornness and mean-field games. In: Proceedings of the 53rd IEEE Conference on Decision and Control, Los Angeles, CA (2014)

  18. Tembine, H., Zhu, Q., Başar, T.: Risk-sensitive mean-field stochastic differential games. In: Proceedings of the 2011 IFAC World Congress, Milan, Italy (2011)

  19. Brock, W.A., Durlauf, S.: Discrete choice with social interactions. Rev. Econ. Stud. 68(2), 235–260 (2001)


  20. Olfati-Saber, R., Fax, J.A., Murray, R.M.: Consensus and cooperation in networked multi-agent systems. Proc. IEEE 95(1), 215–233 (2007)


  21. Priuli, F.S.: Linear-quadratic \(N\)-person and mean-field games: infinite horizon with discounted cost and singular limits. Technical report. arXiv:1403.4090v1 (2014)

  22. Loparo, K., Feng, X.: Stability of stochastic systems. In: Levine, W. S. (ed.) The Control Handbook, pp. 1105–1126. CRC Press, Boca Raton, FL (1996)

  23. Arnold, L.: Stochastic Differential Equations: Theory and Applications. Wiley-Interscience, New York (1974)


  24. Guéant, O., Lasry, J., Lions, P.: Mean Field Games and Applications. In: Carmona R. et al. (eds.), Paris-Princeton Lectures on Mathematical Finance 2010, Lecture Notes in Mathematics 2003, pp. 205–266. Springer-Verlag, Berlin Heidelberg (2011)


Acknowledgments

This work was supported by the 2012 “Research Fellow” Program of the Dipartimento di Matematica, Università di Trento and by PRIN 20103S5RN3 “Robust decision making in markets and organizations, 2013–2016.”

Author information

Corresponding author

Correspondence to Dario Bauso.

Additional information

Communicated by Negash G. Medhin.

Appendix


The optimization problem introduced in Sect. 2 can be turned into a multi-population mean-field game. A preliminary step in the derivation of a mean-field game is the definition of a value function, as is commonly done in differential game theory and optimal control. The value function is the value of the optimization problem carried out by each single player k, starting at time t from state \(x^k\), for given densities m(t). As we will show, the value function depends only on the population's characteristics (apart from the initial state \(x^k(t)\)).

Proposition 6.1

Consider a generic population i and any agent k such that \(i(k)=i\). Define the value function for agent k as

$$\begin{aligned} v_{i}(x^k(t),t) := \sup _{u^k(\cdot )} {\mathbb {E}}\left\{ \int _t^{\infty } e^{-\rho \tau } c^k(x^k(\tau ), u^k(\tau ), m(\tau ))\, d\tau \right\} . \end{aligned}$$

Then, the mean-field system is described by the equations

$$\begin{aligned} \left\{ \begin{array}{l} \partial _t v_i(x^k(t),t) + (1-\alpha _i)\sum _{j\in I} \nu _{ij} \ln (m_j(x^k(t),t)) - \alpha _i (x^k(t) - \mu _{i}(0))^2 \\ \quad + \frac{1}{2 \beta } (\partial _x v_i(x^k(t),t))^2 + \frac{\xi _i^2}{2} \partial ^2_{xx} v_i(x^k(t),t) - \rho v_i(x^k(t),t) = 0, \\ \partial _t m_i(x^k(t),t) + \partial _x \Big [ m_i(x^k(t),t) \Big ( -\frac{1}{2 \beta } \partial _x v_i(x^k(t),t) \Big ) \Big ] - \frac{\xi _i^2}{2} \partial ^2_{xx} m_i(x^k(t),t) = 0, \end{array}\right. \end{aligned}$$
(36)

for some initial population state distribution \(m_{i}(0)\) for all \(i\in I\). Furthermore, the optimal control is of the form

$$\begin{aligned} u_i^*(x^k(t),t)= - \frac{ 1 }{2 \beta } \partial _x v_i(x^k(t),t). \end{aligned}$$
(37)

Proof

From dynamic programming, the value function can be obtained from a corresponding maximized Hamiltonian function \(H^k\) involving an adjoint variable \(p_i\), called the ith co-state, and given by

$$\begin{aligned} H^k(x,p_i,m)=\sup _{u_i}\left\{ c^k(x,u_i,m)+p_i u_i \right\} . \end{aligned}$$

From [13], the mean-field system associated with the mean-field game introduced in Sect. 2 is given by

$$\begin{aligned} &\partial _t v_i(x,t) + {H^k}(x,\partial _x v_i(x,t),m) + \frac{\xi _i^2}{2} \partial ^2_{xx} v_i(x,t) - \rho v_i(x,t) = 0, \\ &\partial _t m_i(x,t) + \partial _x \Big ( m_i(x,t)\, \partial _p H^k(x,\partial _x v_i(x,t),m) \Big ) - \frac{\xi _i^2}{2} \partial ^2_{xx} m_i(x,t) = 0, \end{aligned}$$
(38)

where \(m_i(x,0)=m_{0i}(x)\) for all \(i \in I\) are the initial distributions and where \(x=x^k(t)\).

We first prove condition (37). To this end, let us write the Hamiltonian as:

$$\begin{aligned} {H^k}(x,\partial _x v_i(x,t),m) = \sup _{u_i} \Big \{ (1-\alpha _i) \sum _{j \in I} \nu _{ij} \ln (m_j(x,t)) - \alpha _i (x - \mu _{0i})^2 - \beta u_i^2 + \partial _x v_i(x,t)\, u_i \Big \} . \end{aligned}$$
(39)

By differentiating with respect to \(u_i\), we obtain

$$\begin{aligned} 2 \beta u_i(x,t) + \partial _x v_i(x,t) = 0, \end{aligned}$$
(40)

which yields (37). Note that, for \(\beta > 0\), the maximand in (39) is strictly concave in \(u_i\), which guarantees sufficiency of the above first-order condition.

We now prove (36). Concerning the first equation, which is a PDE corresponding to the Hamilton–Jacobi–Bellman equation, let us replace \(u_i\) in the Hamiltonian (39) by its expression (37), i.e.,

$$\begin{aligned} {H^k}(x,\partial _x v_i(x,t),m)&= (1-\alpha _i)\sum _{j \in I} \nu _{ij}\ln (m_j(x,t)) - \alpha _i (x - \mu _{0i})^2\\&\quad - \beta (u_i^*(x,t))^2 + \partial _x v_i(x,t) u_i^*(x,t) \\&= (1-\alpha _i)\sum _{j \in I} \nu _{ij}\ln (m_j(x,t)) - \alpha _i (x - \mu _{0i})^2 \\&\quad +\frac{1}{2 \beta } (\partial _x v_i(x,t))^2. \end{aligned}$$

Using the above expression of the Hamiltonian in the first equation in (38), we obtain the Hamilton–Jacobi–Bellman equation in (36).

To prove the second equation, which is a PDE representing the Fokker–Planck–Kolmogorov equation, we simply substitute (37) into the second equation in (38), and this concludes the proof. \(\square \)

The significance of the above result is that, to find the optimal controls, we need to solve the set of coupled PDEs defined in (36) with given boundary conditions. This can be done by iteratively solving the Hamilton–Jacobi–Bellman equation for fixed \(m_i\) and entering the optimal \(u_i\) obtained from (37) into the Fokker–Planck–Kolmogorov equation, until a fixed point in \(v_i\) and \(m_i\) is reached [24]. For this iteration to be well posed, the corresponding map must admit a fixed point; this can be established through compactness of the map itself and the Schauder fixed-point theorem [15]. Note that, in Proposition 6.1, we do not consider a stationary control or a stationary population density distribution, although we deal with a discounted objective function over an infinite horizon. In fact, we are interested in determining the evolution of the population density distribution over time under the general hypothesis that, at time 0, the population is not distributed according to the long-term equilibrium density distribution.
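
As a minimal numerical sketch of this iteration, consider a single population (so that the sum over \(j\) reduces to one logarithmic term, here with weight 1) on a truncated horizon \([0,T]\) approximating the discounted infinite-horizon problem. All numerical choices below (grid sizes, parameter values, the damping factor, the clipping of the logarithm) are illustrative assumptions on our part, not prescriptions from the paper:

```python
import numpy as np

# Hypothetical parameters: stubbornness alpha, control weight beta, discount
# rho, noise xi, initial mean mu0. None of these values come from the paper.
alpha, beta, rho, xi, mu0 = 0.5, 1.0, 0.5, 0.3, 0.0
T, Nt, L, Nx = 2.0, 2000, 4.0, 81           # truncated horizon and grids
x = np.linspace(-L, L, Nx)
dt, dx = T / Nt, x[1] - x[0]

def d1(f):                                  # first derivative in x
    return np.gradient(f, dx)

def d2(f):                                  # second derivative in x
    g = np.zeros_like(f)
    g[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    return g

m0 = np.exp(-((x - 1.0) ** 2) / 0.5)        # initial opinion density m(x, 0)
m0 /= np.trapz(m0, x)

m = np.tile(m0, (Nt + 1, 1))                # initial guess for the density flow
for it in range(30):
    # 1) HJB backward in time for fixed m, terminal condition v(x, T) = 0.
    v = np.zeros((Nt + 1, Nx))
    for n in range(Nt - 1, -1, -1):
        vx, vxx = d1(v[n + 1]), d2(v[n + 1])
        running = ((1.0 - alpha) * np.log(np.maximum(m[n + 1], 1e-12))
                   - alpha * (x - mu0) ** 2)
        v[n] = v[n + 1] + dt * (running + vx**2 / (2.0 * beta)
                                + 0.5 * xi**2 * vxx - rho * v[n + 1])
    # 2) Optimal control from (37), then FPK forward in time.
    m_new = np.empty_like(m)
    m_new[0] = m0
    for n in range(Nt):
        u = -d1(v[n]) / (2.0 * beta)        # u* = -(1 / 2 beta) dv/dx
        m_new[n + 1] = m_new[n] + dt * (-d1(m_new[n] * u)
                                        + 0.5 * xi**2 * d2(m_new[n]))
        m_new[n + 1] = np.maximum(m_new[n + 1], 0.0)
        m_new[n + 1] /= np.trapz(m_new[n + 1], x)   # keep unit mass
    # 3) Damped update; stop at an approximate fixed point in (v, m).
    err = np.abs(m_new - m).max()
    m = 0.5 * m + 0.5 * m_new
    if err < 1e-6:
        break

print(f"stopped after {it + 1} sweeps, residual {err:.2e}")
```

The damping in step 3 is a standard stabilization device for Picard-type iterations and is again our choice, not the paper's; a computed pair \((v, m)\) can be sanity-checked by running one extra sweep and confirming that the residual remains below tolerance.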

A solution of (36) is called a mean-field Nash equilibrium, as it involves a set \(\{(m^*_i,u^*_i): i \in I\}\) of functions, defined for all times \(t\ge 0\), such that

$$\begin{aligned} (m^*_i,u^*_i) = \arg \sup _{m_i(\cdot ),u_i(\cdot )} {\mathbb {E}}\left\{ \int _0^{\infty } e^{-\rho t} c^k(x^k, u_i, m)\, \hbox {d}t \,\Big |\, m_j = m^*_j,\ \forall j \in I \setminus \{i\} \right\} , \quad \forall i \in I. \end{aligned}$$

In other words, no player of population i benefits from changing its control policy \(u_i^*\) when the control policies, and therefore also the distributions, of the other populations are fixed at \(u_j^*\) and \(m_j^*\), respectively, for all \(j \in I \setminus \{i\}\). As a consequence, the trajectory over time of the distribution \(m_i^*\) also remains unchanged.

Cite this article

Bauso, D., Pesenti, R. & Tolotti, M. Opinion Dynamics and Stubbornness Via Multi-Population Mean-Field Games. J Optim Theory Appl 170, 266–293 (2016). https://doi.org/10.1007/s10957-016-0874-5

