
Dangerous tangents: an application of \(\Gamma \)-convergence to the control of dynamical systems

Decisions in Economics and Finance

Abstract

Inspired by the classical riot model proposed by Granovetter in 1978, we consider a parametric stochastic dynamical system that describes the collective behavior of a large population of interacting agents. By controlling a parameter, a policy maker seeks to minimize her own disutility, which in turn depends on the steady state of the system. We show that this economically sensible optimization is ill-posed and illustrate a novel way to tackle this practical and formal issue. Our approach is based on the \(\Gamma \)-convergence of a sequence of mean-regularized instances of the original problem. The corresponding minimum points converge toward a unique value that intuitively is the solution of the original ill-posed problem. Notably, to the best of our knowledge, this is one of the first applications of \(\Gamma \)-convergence in economics.



Notes

  1. Formally, this means that \(\rho (\sigma )=\lim _{t\rightarrow \infty } F^t(r_0; \sigma )\), where \(F^t\) denotes the composition of F with itself t times.

  2. A brief description of saddle-node bifurcations is given in Appendix A. See Strogatz (2015) for an exhaustive analysis of bifurcations in dynamical systems.

  3. We refer the reader to Appendix B for a brief recap on \(\Gamma \)-convergence and to Braides (2002) for an extensive description.

  4. The Gaussian distribution with \(\mu =0.25\) was suggested by Granovetter (1978). Note that both the lower bound \(\sigma _{\min }\) and the upper bound \(\sigma _{\max }\) are imposed for technical reasons related to the proof of \(\Gamma \)-convergence.

  5. We are interested in studying the case where the initial condition is zero (or small) as a benchmark for applications where the social phenomenon is analyzed from its outset.

  6. In this respect, \(\rho (\sigma )\) can be interpreted as the lowest fixed point function in the sense of Milgrom and Roberts (1994).

  7. For a formal analysis of saddle-node bifurcations and related facts, we refer the reader to Lemma A.1 in Appendix A.

  8. See Granovetter (1978, pp. 1427–1428).

  9. See Allen and Sanglier (1979, p. 257).

  10. More details on such threshold levels for k and \(\mu \) are provided in Appendix A.

  11. All proofs are given in Appendix A.

  12. We could also consider the quantity \(r^{(i)}_N=\frac{\sum _{j\ne i}y_j}{N-1}\). When N becomes large (infinite), the contribution of \(y_i\) is negligible, thus the two problems have exactly the same limiting behavior.

  13. The proof of this rather classical result is omitted. We refer the reader to Ethier and Kurtz (2009) for more details.

  14. As R's optimize routine typically evaluates the objective function about 20 times, the total number of sampled thresholds exceeds \(10^{10}\).

  15. See Sect. 3.1 in Strogatz (2015) for a general discussion on saddle-node bifurcations. Figure 3.1.7 provides an example that is similar in spirit to our situation.

  16. To ease readability, in the remainder of the proof we omit to say that the inequalities dealing with random variables hold almost surely (i.e., with probability one).

  17. If \(\sigma =\sigma _c\), no matter the value of \(\varepsilon \), it is not possible to identify a strict inequality, because F is tangent to the bisector line at the value \(\rho (\sigma )\) when \(\sigma =\sigma _c\).

  18. Obviously \(\delta <\min \{\sigma _c-\sigma _{\min }\,, \sigma _{\max }-\sigma _c\}\), otherwise the statement is meaningless.

  19. See, for example, Corollary 3 in Ford and Pennline (2007).

  20. See Definition B.7. In particular, our sequence \((f_N)_N\) is equi-mildly coercive since all functions \(f_N\) are bounded from below. All infima are reached in \([\sigma _{\min }, \sigma _{\max }]\), which is compact.

  21. This latter relation is due to the fact that, when the solution \(x_0\) to \(G(x,\sigma )=0\) is unique, i.e., for \(\sigma >\sigma _c\), we have \(x_0>\mu \).

References

  • Allen, P.M., Sanglier, M.: A dynamic model of growth in a central place system. Geogr. Anal. 11(3), 256–272 (1979)

  • Barucci, E., Tolotti, M.: Social interaction and conformism in a random utility model. J. Econ. Dyn. Control 36(12), 1855–1866 (2012a)

  • Barucci, E., Tolotti, M.: Identity, reputation and social interaction with an application to sequential voting. J. Econ. Interac. Coord. 7(1), 79–98 (2012b)

  • Bass, F.M.: A new product growth for model consumer durables. Manage. Sci. 15(5), 215–227 (1969)

  • Blume, L., Durlauf, S.: Equilibrium concepts for social interaction models. Int. Game Theory Rev. 5(3), 193–209 (2003)

  • Braides, A.: \(\Gamma \)-Convergence for Beginners. Oxford Lecture Series in Mathematics and its Applications, 22. Oxford University Press, Oxford (2002)

  • Carlier, G.: A general existence result for the principal-agent problem with adverse selection. J. Math. Econom. 35(1), 129–150 (2001)

  • Ethier, S.N., Kurtz, T.G.: Markov Processes: Characterization and Convergence, vol. 282. Wiley (2009)

  • Ford, W.F., Pennline, J.A.: When does convergence in the mean imply uniform convergence? Am. Math. Mon. 114(1), 58–60 (2007)

  • Ghisi, M., Gobbino, M.: The monopolist’s problem: existence, relaxation, and approximation. Calc. Var. Partial Differ. Equ. 24(1), 111–129 (2005)

  • Gordon, M., Nadal, J.-P., Phan, D., Semeshenko, V.: Entanglement between demand and supply in markets with bandwagon goods. J. Stat. Phys. 151(3–4), 494–522 (2013)

  • Granovetter, M.: Threshold models of collective behavior. Am. J. Sociol. 83(6), 1420–1443 (1978)

  • Milgrom, P., Roberts, J.: Comparing equilibria. Am. Econ. Rev. 84(3), 441–459 (1994)

  • Monteiro, P., Page, F.H., Jr.: Optimal selling mechanisms for multiproduct monopolists: incentive compatibility in the presence of budget constraints. J. Math. Econom. 30(4), 473–502 (1998)

  • Nadal, J.-P., Phan, D., Gordon, M.B., Vannimenus, J.: Multiple equilibria in a monopoly market with heterogeneous agents and externalities. Quant. Finance 5(6), 557–568 (2005)

  • Peres, R., Muller, E., Mahajan, V.: Innovation diffusion and new product growth models: a critical review and research directions. Int. J. Res. Mark. 27(2), 91–106 (2010)

  • R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/ (2018)

  • Rochet, J.C., Choné, P.: Ironing, sweeping and multidimensional screening. Econometrica 66(4), 783–826 (1998)

  • Schelling, T.C.: Dynamic models of segregation. J. Math. Sociol. 1(2), 143–186 (1971)

  • Strogatz, S.H.: Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering, 2nd edn. Westview Press, Boulder, CO (2015)

  • Sundaram, R.K.: A First Course in Optimization Theory. Cambridge University Press, Cambridge (1996)


Acknowledgements

We thank Marco LiCalzi for insightful discussions and comments. Paolo Dai Pra brought \(\Gamma \)-convergence to our attention. The work was funded in part by the ITN ExSIDE European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 721846.

Author information

Corresponding author

Correspondence to Rosario Maggistro.


Appendices

Proofs

All proofs are given in this appendix.

1.1 Proof of Lemma 2.1

We first state and prove a technical result from the bifurcation theory of dynamical systems.

Lemma A.1

(Saddle-node bifurcation) Consider the dynamical system

$$\begin{aligned} x(t+1)=F(x(t);\sigma ); \quad x(0)=x_0\in [0,1],\end{aligned}$$
(14)

where \(x \in [0,1]\) and \(F(\cdot \,;\sigma )\) is a continuous distribution function with standard deviation \(\sigma \), admitting a unimodal density function. The set of steady states of (14) is given by the solutions to

$$\begin{aligned} x=F(x;\sigma ).\end{aligned}$$
(15)

There exists a threshold level \(\sigma _c\) such that:

  i) if \(\sigma >\sigma _c\), (15) admits a unique solution x;

  ii) if \(\sigma =\sigma _c\), (15) admits two solutions \(x^l<x^h\);

  iii) if \(\sigma <\sigma _c\), (15) admits three solutions \(x^l<x^m<x^h\).

If the solution x is unique, it is a globally stable attractor for the dynamical system in (14). In case of three equilibria, if \(x_0<x^m\), then

$$\begin{aligned} \lim _{t\rightarrow \infty } x(t)=x^l. \end{aligned}$$

In the opposite case, if \(x_0>x^m\), then

$$\begin{aligned} \lim _{t\rightarrow \infty } x(t)=x^h. \end{aligned}$$

Proof

Note that \(F(0;\sigma )>0\) and \(F(1; \sigma )<1\) for any \(\sigma \). Therefore, at least one solution x to (15) exists. Since F is S-shaped, it is convex for small x and concave for large x; hence, at most three solutions to (15) can appear. The number of solutions to (15) depends on \(\sigma \). This is an example of a saddle-node bifurcation.Footnote 15 By definition, \(\sigma _c\) identifies the unique situation in which F is tangent to the bisector line at some point. In this case, exactly two different solutions to (15) exist. As soon as we take a value \(\sigma >\sigma _c\), the tangency point disappears. In the opposite case, for \(\sigma <\sigma _c\), there are three intersections. We check the stability of the steady states by looking at the linearized version of the system (14). In this way, it is not difficult to see that if the equilibrium \({{\bar{x}}}\) is unique, then \(F'({{\bar{x}}};\sigma )<1\), so it is linearly stable. In case of three equilibria, \(F'(x^m;\sigma )>1\), whereas \(F'(x^l;\sigma )<1\) and \(F'(x^h;\sigma )<1\). \(\square \)

Returning to the proof of Lemma 2.1, we are exactly in this situation, since F is Gaussian. The graph of the distribution function F intersects the bisector either three times, twice, or once. A visual representation of the three different cases is reported in Fig. 1. When \(\sigma =\sigma _c\), we are in the tangency situation: the graph of F intersects the bisector line at a point \(x^l\), where the curves are tangent, and at a second point \(x^h\), where they cross. If the expected value \(\mu \) of the distribution is such that \(\mu < 1/2\), then it is easy to see that \(x^l<1/2<x^h\). Finally, recall that we take \(r_0=0\), so that the dynamical system always converges to \(x^l\), that is, the smallest among the possible solutions to (15). Let us call this equilibrium \(\rho (\sigma )\).
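
To make this concrete, the following is a minimal numerical sketch (our illustration, not the authors' code) of the fixed-point iteration: it computes \(\rho (\sigma )\) by iterating \(x \mapsto F(x;\sigma )\) from \(x_0=0\), with F the Gaussian distribution function with mean \(\mu =0.25\) as in Footnote 4; the sample values of \(\sigma \) are purely illustrative.

```r
# Minimal sketch: smallest fixed point rho(sigma) of x = F(x; sigma),
# with F a Gaussian c.d.f. and mu = 0.25 as in Footnote 4.
rho <- function(sigma, mu = 0.25, tol = 1e-12, max_iter = 1e5) {
  x <- 0                                       # r_0 = 0, see Footnote 5
  for (t in seq_len(max_iter)) {
    x_new <- pnorm(x, mean = mu, sd = sigma)   # x_{t+1} = F(x_t; sigma)
    if (abs(x_new - x) < tol) break
    x <- x_new
  }
  x_new
}
sapply(c(0.10, 0.15, 0.20), rho)   # rho jumps upward as sigma crosses sigma_c
```

For \(\sigma \) below the critical value, the iteration settles on the low equilibrium \(x^l\); just above it, the low intersection disappears and the iteration runs to the high equilibrium, which is the jump behind the discontinuity of \(\rho \).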

The continuity of \(f(\sigma )\) on \([\sigma _{\min },\sigma _c]\) and on \((\sigma _c, \sigma _{\max }]\) immediately follows from the continuity of the map \(\sigma \mapsto \rho (\sigma )\) on the same intervals. However, the proof of this latter property is not trivial and is postponed to Appendix C.

It remains to show that \(f(\sigma )=k\sigma - \rho (\sigma )\) is bounded from below and that the infimum of f is exactly \({{\tilde{f}}}\), defined as \(\lim _{\sigma \rightarrow \sigma _c^+}f(\sigma )\). It is convenient to study the two intervals \([\sigma _{\min }, \sigma _c]\) and \((\sigma _c, \sigma _{\max }]\) separately. Concerning the latter, we show that \(\sigma \mapsto \rho (\sigma )\) is decreasing on this interval, so that f is increasing there. As noted above, when \(\sigma >\sigma _c\) there exists a unique x solving (15), with \(x>\mu \). Moreover, still for \(\sigma >\sigma _c\) and \(x>\mu \), \(F(x;\sigma )\) is concave and increasing. Let \(\sigma _1, \sigma _2 \in (\sigma _c, \sigma _{\max }]\) be such that \(\sigma _1<\sigma _2\). Then \(F(\cdot ;{\sigma _1})\) and \(F(\cdot ;{\sigma _2})\) satisfy

$$\begin{aligned} F(x; {\sigma _2})<F(x; {\sigma _1} ) \quad \ \forall \, x>\mu . \end{aligned}$$
(16)

Let us call \(\xi _1\) the unique solution to \(F(x; {\sigma _1})=x\). Then, by (16),

$$\begin{aligned} F(\xi _1; {\sigma _2})<F(\xi _1; {\sigma _1})=\xi _1. \end{aligned}$$

Since \(F(\cdot ; {\sigma _2})\) is increasing, there exists \(\xi _2< \xi _1\) such that \(F(\xi _2; {\sigma _2})=\xi _2\). Note that \(\xi _1=\rho (\sigma _1)\) and \(\xi _2=\rho (\sigma _2)\). Therefore, \(\rho (\sigma )\) is decreasing on \((\sigma _c, \sigma _{\max }]\). Concerning the interval \([\sigma _{\min },\sigma _c]\), a similar argument shows that, in this case, \(\sigma \mapsto \rho (\sigma )\) is increasing up to the level \(\rho (\sigma _c)\), and that \(\lim _{\sigma \rightarrow \sigma _c^+} \rho (\sigma ):={{\tilde{r}}}_c>\rho (\sigma _c)\). Since, by assumption,

$$\begin{aligned} k<k^{th}:=\frac{\tilde{r}_c-\rho (\sigma _c)}{\sigma _c}, \end{aligned}$$

we have that:

  1. on \((\sigma _{\min },\sigma _c)\),

     $$\begin{aligned} k<\frac{{{\tilde{r}}}_c-\rho (\sigma _c)}{\sigma _c}< \frac{{{\tilde{r}}}_c-\rho (\sigma _c)}{\sigma _c-\sigma }<\frac{{{\tilde{r}}}_c-\rho (\sigma )}{\sigma _c-\sigma }, \end{aligned}$$

     where the latter is due to the monotonicity of \(\rho \). Thus,

     $$\begin{aligned} k< \frac{{{\tilde{r}}}_c-\rho (\sigma )}{\sigma _c-\sigma }\iff k(\sigma _c-\sigma )<{{\tilde{r}}}_c-\rho (\sigma )\iff k\sigma _c -\tilde{r}_c<k\sigma -\rho (\sigma ); \end{aligned}$$

  2. for \(\sigma =\sigma _{\min }\), \(k\sigma _c -{{\tilde{r}}}_c<k\sigma _{\min }-\rho (\sigma _{\min }):=f(\sigma _{\min });\)

  3. for \(\sigma =\sigma _c \), \(-{{\tilde{r}}}_c< -\rho (\sigma _c)\) and hence \({{\tilde{f}}}<f(\sigma _c)\).

Summarizing, we have proved that: (i) f is left-continuous with a discontinuity at \(\sigma _c\); (ii) the function f is bounded from below and admits a finite infimum \({{\tilde{f}}}\), which is not a minimum. \(\square \)
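
Both findings can be visualized with a short sketch (hypothetical \(k<k^{th}\), reusing rho() from the sketch above): the plotted f stays above \({{\tilde{f}}}\) on \([\sigma _{\min },\sigma _c]\) and drops toward it only as \(\sigma \rightarrow \sigma _c^+\), so the infimum is approached but never attained.

```r
# Sketch of f(sigma) = k*sigma - rho(sigma) with a hypothetical k; the plot
# shows the downward jump at sigma_c and the unattained infimum.
k   <- 0.5                          # illustrative value only
sig <- seq(0.06, 0.30, by = 0.002)
plot(sig, k * sig - sapply(sig, rho), pch = 20,
     xlab = expression(sigma), ylab = expression(f(sigma)))
```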

1.2 Proof of Theorem 3.3

We first state and prove five technical lemmas related to \(R_N\) as defined in (11) and to \(\rho _N(\sigma )\) as defined in (12).

Lemma A.2

For each N, \(R_N\) is a measurable and bounded function of the finite sample \({{\tilde{X}}}\equiv (X_1, \dots , X_N)\). Moreover, the function \(\sigma \mapsto \rho _N(\sigma )\) is continuous.

Proof

$$\begin{aligned} \rho _N(\sigma )={\mathbb {E}}^\sigma [R_N]=\int _{supp({{\tilde{X}}})} R_N({{\tilde{x}}}) \, d{{\tilde{F}}}({{\tilde{x}}};\sigma ),\end{aligned}$$
(17)

where \({{\tilde{F}}}( {{\tilde{x}}} ;\sigma )=\Pi _{i=1}^N F(x_i;\sigma )\) and where \({{\tilde{x}}}=(x_1,\dots ,x_N)\) is a realization of the sample \({{\tilde{X}}}\). Note that the integrand function \(R_N({{\tilde{x}}})\) is a measurable and bounded function of the sample. As a consequence, the integral is well-defined; moreover, it is continuous in \(\sigma \) as soon as F is continuous in \(\sigma \). \(\square \)

Lemma A.3

For any \(\sigma \ne \sigma _c\) and \(\varepsilon >0\),

$$\begin{aligned} \lim _{N\rightarrow \infty } {\mathbb {P}}\left( |R_N-\rho (\sigma )|>\varepsilon \right) =0, \end{aligned}$$

where \(\rho (\sigma )=\min \{ x:F(x;\sigma )=x \}\). Moreover,

$$\begin{aligned} {\mathbb {E}}^\sigma [|R_N-\rho (\sigma )|] \rightarrow 0. \end{aligned}$$

As a consequence, \(\rho _N(\sigma ):={\mathbb {E}}^\sigma [R_N] \rightarrow \rho (\sigma )\).

Proof

Fix \(\sigma \ne \sigma _c\). We show separately that, for N large enough and with probability one, \(R_N<\rho (\sigma )+\varepsilon \) and \(R_N>\rho (\sigma )-\varepsilon \) for any \(\varepsilon > 0\). We start with the former inequality.Footnote 16 To this end, we consider an alternative and equivalent definition for \(R_N\):

$$\begin{aligned} R_N=\min \{x:F_N(x;\sigma )=x\}. \end{aligned}$$

We show that there exists \(\varepsilon _0>0\) such that for all \(\varepsilon >\varepsilon _0\), there exists N such that \(F_N(x;\sigma )<x\) for \(x=\rho (\sigma )+\varepsilon \).Footnote 17 This latter inequality states exactly that \(R_N<\rho (\sigma )+\varepsilon \). By way of contradiction, suppose that there exists \(\varepsilon >0\) such that for all N, \(F_N(\rho (\sigma )+\varepsilon ;\sigma )\ge \rho (\sigma )+\varepsilon \); then

$$\begin{aligned} F_N(\rho (\sigma )+\varepsilon ;\sigma )\ge \rho (\sigma )+\varepsilon >F(\rho (\sigma );\sigma ), \end{aligned}$$

where the latter inequality comes from the fact that \(F(\rho (\sigma ); \sigma )=\rho (\sigma )\). Now this is a contradiction, since F is continuous in its first argument and \(\sup _x |F_N(x;\sigma )-F(x;\sigma )|\rightarrow 0\) by virtue of the classical Glivenko–Cantelli Theorem.

To prove that \(R_N>\rho (\sigma )-\varepsilon \), we use a similar argument. We show that there exists \(\varepsilon _0>0\) such that for all \(\varepsilon >\varepsilon _0\), there exists N such that \(F_{N}(x;\sigma )>x\) for \(x=\rho (\sigma )-\varepsilon \). By way of contradiction, suppose that there exists \(\varepsilon >0\) such that for all N, \(F_N(\rho (\sigma )-\varepsilon ;\sigma )\le \rho (\sigma )-\varepsilon \); then

$$\begin{aligned} F_N(\rho (\sigma )-\varepsilon ;\sigma )\le \rho (\sigma )-\varepsilon <F(\rho (\sigma ); \sigma ). \end{aligned}$$

Finally, note that for all \(x\le \rho (\sigma )-\varepsilon \), there exists \({{\bar{N}}}\) such that \(F_{{{\bar{N}}}}(x; \sigma )>x\) for sure. Suppose there exists \({{\tilde{x}}}<\rho (\sigma )-\varepsilon \) such that \(F_{N}({{\tilde{x}}};\sigma )\le {{\tilde{x}}}\) for all N. Since \(F(\tilde{x}; \sigma )>{{\tilde{x}}}\), again we find a contradiction with the Glivenko–Cantelli Theorem. Therefore, \(R_N>\rho (\sigma )-\varepsilon \) for sure. Note now that the sequence of random variables \(R_N\) is uniformly bounded; hence, by Lebesgue’s dominated convergence theorem, \({\mathbb {E}}^\sigma [|R_N-\rho (\sigma )|] \rightarrow 0\), and we obtain convergence in mean. \(\square \)
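
For intuition, the finite-N equilibrium admits a direct simulation (a Monte Carlo sketch under illustrative parameters, not the authors' code): starting from \(r=0\), the empirical cascade \(r \mapsto F_N(r;\sigma )\) is nondecreasing and takes values in \(\{0, 1/N, \dots , 1\}\), so it reaches the smallest fixed point of \(F_N\) in at most N steps.

```r
# Sketch of R_N = min{x : F_N(x; sigma) = x}: run the empirical threshold
# cascade from r = 0 over N thresholds sampled from N(mu, sigma^2).
R_N <- function(N, sigma, mu = 0.25) {
  X <- rnorm(N, mean = mu, sd = sigma)   # individual thresholds
  r <- 0                                 # cascade starts at r_0 = 0
  repeat {
    r_new <- mean(X <= r)                # empirical c.d.f. F_N(r; sigma)
    if (r_new == r) break                # smallest fixed point reached
    r <- r_new
  }
  r
}
set.seed(1)
mean(replicate(1000, R_N(N = 500, sigma = 0.10)))   # Monte Carlo estimate of rho_N(0.10)
```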

Lemma A.4

The sequence of derivatives \((\rho '_N(\sigma ))_{N}\) exists and is uniformly bounded on \([\sigma _{\min }, \sigma _{\max }]\).

Proof

We use the expression for \(\rho _N\) as in (17) and differentiate it w.r.t. \(\sigma \). To this end, note that the random variable \(R_N\) does not explicitly depend on \(\sigma \); we recall, moreover, that

$$\begin{aligned} \hbox {d}{{\tilde{F}}}({{\tilde{x}}};\sigma )=\frac{1}{\sqrt{2\pi \sigma ^2}} \, {e^{-\frac{1}{2}{\frac{({\tilde{x}}-\mu )^2}{\sigma ^2}}}}\hbox {d}{\tilde{x}}. \end{aligned}$$

A simple calculation gives:

$$\begin{aligned} \frac{\hbox {d}\rho _N(\sigma )}{\hbox {d}\sigma } = -\frac{1}{\sigma }\rho _N(\sigma ) + \frac{1}{\sigma ^3} \int _{\mathrm{supp}({{\tilde{X}}})} R_N({{\tilde{x}}}) (\tilde{x}-\mu )^2 \hbox {d}{{\tilde{F}}}({{\tilde{x}}};\sigma ). \end{aligned}$$

Therefore,

$$\begin{aligned} \left| \frac{\hbox {d}\rho _N(\sigma )}{\hbox {d}\sigma } \right| \le \left| \frac{1}{\sigma }\rho _N(\sigma ) \right| +\left| \frac{1}{\sigma ^3} \int _{\mathrm{supp}({{\tilde{X}}})} R_N({{\tilde{x}}}) (\tilde{x}-\mu )^2 \hbox {d}{{\tilde{F}}}({{\tilde{x}}};\sigma )\right| . \end{aligned}$$
(18)

Concerning the integral, since \(R_N({{\tilde{x}}})({{\tilde{x}}}-\mu )^2\le R_N^2({{\tilde{x}}}) + ({{\tilde{x}}}-\mu )^4\), we have

$$\begin{aligned}&0\le \int _{\mathrm{supp}({{\tilde{X}}})} R_N({{\tilde{x}}}) ({{\tilde{x}}}-\mu )^2 \hbox {d}{{\tilde{F}}}({{\tilde{x}}};\sigma ) \\&\quad \le \int _{\mathrm{supp}({{\tilde{X}}})} R_N^2({{\tilde{x}}}) \hbox {d}{{\tilde{F}}}({{\tilde{x}}};\sigma ) + \int _{\mathrm{supp}({{\tilde{X}}})} ({{\tilde{x}}}-\mu )^4 \hbox {d}{{\tilde{F}}}({{\tilde{x}}};\sigma ). \end{aligned}$$

Returning to (18), and noting that all the expressions on the r.h.s. are positive, we have

$$\begin{aligned} \left| \frac{\hbox {d}\rho _N(\sigma )}{\hbox {d}\sigma } \right| \le \frac{\rho _N(\sigma )}{\sigma } + \frac{1}{\sigma ^3}\int _{\mathrm{supp}({{\tilde{X}}})} R_N^2({{\tilde{x}}}) \hbox {d}{{\tilde{F}}}({{\tilde{x}}};\sigma ) + \frac{1}{\sigma ^3} \int _{\mathrm{supp}({{\tilde{X}}})} ({{\tilde{x}}}-\mu )^4 \hbox {d}{{\tilde{F}}}({{\tilde{x}}};\sigma ). \end{aligned}$$

Since \(\rho _N(\sigma )\le 1\), \(R_N^2\le 1\) for all N, and

$$\begin{aligned} \int _{\mathrm{supp}({{\tilde{X}}})} ({{\tilde{x}}}-\mu )^4 \hbox {d}{{\tilde{F}}}({{\tilde{x}}};\sigma )=\sigma ^4 Kurt({{\tilde{X}}})=3\sigma ^4, \end{aligned}$$

we have that

$$\begin{aligned} \left| \frac{\hbox {d}\rho _N(\sigma )}{\hbox {d}\sigma } \right| \le \frac{1}{\sigma } + \frac{1}{\sigma ^3} + 3\sigma \le K, \end{aligned}$$

where K is a suitable constant, independent of \(\sigma \) (for instance, \(K=1/\sigma _{\min }+1/\sigma _{\min }^3+3\sigma _{\max }\) works, since the first two terms are decreasing in \(\sigma \) and the third is increasing).

\(\square \)

Lemma A.5

The sequence \((\rho _N)_N\) converges uniformly to \(\rho \) on the two disjoint intervals \([\sigma _{\min }, \sigma _c-\delta ]\) and \([\sigma _c+\delta ,\sigma _{\max }]\), for every \(\delta >0\).Footnote 18

Proof

We use the fact that if a sequence of continuous real-valued functions \((F_n)_n\) converges in \(L^p\) for some \(p \in [1, \,+\infty )\) on a closed and finite interval to a limit function F and, in addition, the sequence of derivatives \(F'_n\) exists and is uniformly bounded, then the sequence converges to F also uniformly.Footnote 19 We apply this result to the sequence \((\rho _N)_N\). Convergence in \(L^1\) on the disjoint intervals follows from Lemma A.3. The fact that the sequence of derivatives \(\rho '_N\) is uniformly bounded on the entire domain \([\sigma _{\min },\sigma _{\max }]\) has been proved in Lemma A.4. \(\square \)

Lemma A.6

For all \(\varepsilon >0\) and for N large enough,

$$\begin{aligned} \rho _N(\sigma _c)\le {{\tilde{r}}}_c +\varepsilon . \end{aligned}$$

Proof

Fix \(\varepsilon >0\). We show that there exists \(\delta >0\) such that, for N large enough,

$$\begin{aligned}|\rho _N(\sigma _c) - {{\tilde{r}}}_c| \le \varepsilon _\delta <\varepsilon .\end{aligned}$$

Take \(\sigma =\sigma _c+\delta \), \(\delta >0\). Then

$$\begin{aligned}&|\rho _N(\sigma _c) - {{\tilde{r}}}_c| \le |\rho _N(\sigma _c) - \rho _N(\sigma _c+\delta )|+|\rho _N(\sigma _c+\delta )- {{\tilde{r}}}_c| \\&\quad \le |\rho _N(\sigma _c) - \rho _N(\sigma _c+\delta )|+|\rho _N(\sigma _c+\delta )- \rho (\sigma _c+\delta ) | +| \rho (\sigma _c+\delta ) - \tilde{r}_c|\\&\quad \le \varepsilon _\delta := \varepsilon _\delta ^{(1)}+\varepsilon _\delta ^{(2)}+\varepsilon _\delta ^{(3)}, \end{aligned}$$

where

$$\begin{aligned}|\rho _N(\sigma _c) - \rho _N(\sigma _c+\delta )|\le \varepsilon _\delta ^{(1)} \end{aligned}$$

is due to the continuity of \(\rho _N\),

$$\begin{aligned}|\rho _N(\sigma _c+\delta )- \rho (\sigma _c+\delta ) | \le \varepsilon _\delta ^{(2)} \end{aligned}$$

follows from the fact that \(\rho _N\rightarrow \rho \) for any \(\sigma \ne \sigma _c\), and, finally,

$$\begin{aligned}| \rho (\sigma _c+\delta ) - {{\tilde{r}}}_c|\le \varepsilon _\delta ^{(3)} \end{aligned}$$

follows from the fact that \(\lim _{\sigma \rightarrow \sigma _c^+} \rho (\sigma )={{\tilde{r}}}_c\). \(\square \)

Returning to the statement of Theorem 3.3, to prove the well-posedness of the problem \((\text{ P2})\), simply note that, according to Lemma A.2, \(f_N(\sigma )=k\sigma -\rho _N(\sigma )\) is continuous, \(\rho _N(\sigma )\) is bounded, and, finally, \(\lim _{\sigma \rightarrow \sigma _{\max }} f_N(\sigma )=M\), with \(M>0\) large enough. The objective function is therefore continuous and bounded from below; hence, it admits a minimum and a minimum point \(\sigma ^*_N\).

For the second part of the statement, we use the tool of \(\Gamma \)-convergence, which, under suitable conditions, implies convergence of minimum values and of minimum points.

It is easy to see that the sequence of objective functions \((f_N)_N\) defined on \({\mathbb {R}}^+\) is equi-mildly coercive.Footnote 20 Moreover, given the function \({f}(\sigma )=k\sigma -\rho (\sigma )\), we consider its lower-semicontinuous envelope sc f (see Definition B.4), that is, for every \(\sigma \in [\sigma _{\min }, \sigma _{\max }]\),

$$\begin{aligned} sc{f}(\sigma )={\left\{ \begin{array}{ll} k\sigma -\rho (\sigma ) &{} \text {if} \ \sigma \ne {\sigma }_c\\ k{\sigma }_c-{\tilde{r}}_c &{} \text {if} \ \sigma = {\sigma }_c \end{array}\right. } \end{aligned}$$
(19)

with \({\tilde{r}}_c\) as in (4).

As seen in Lemma A.5, \(\rho _N(\sigma )\) converges uniformly to \(\rho (\sigma )\) on \([\sigma _{\min }, \sigma _c-\delta ]\cup [\sigma _c+\delta , \sigma _{\max }]\) for every \(\delta >0\). It is now immediate to derive that \(f_N(\sigma ) \rightarrow f(\sigma )\) uniformly on the open set \(U_{\delta }:= (\sigma _{\min }, \sigma _c-\delta )\, \cup \, (\sigma _c+\delta , \sigma _{\max })\) for every \(\delta >0\). As a consequence, it converges uniformly on the set \(U:=\bigcup _{\delta >0}\,U_{\delta }=(\sigma _{\min }, \sigma _c)\, \cup \, (\sigma _c, \sigma _{\max })\). Then, by applying Proposition B.5, we obtain that

$$\begin{aligned} \Gamma -\lim _Nf_N(\sigma )=sc f(\sigma )=f(\sigma ) \ \ \text {on} \ \ U. \end{aligned}$$

It remains to study the value of \(\Gamma \)-\(\lim _Nf_N(\sigma )\) for the values of \(\sigma \) at the frontier of U, namely, \(\sigma \in \{\sigma _{\min }, \sigma _c, \sigma _{\max }\}\). We now show that at \(\sigma =\sigma _c\),

$$\begin{aligned} \Gamma -\lim _Nf_N(\sigma _c)=scf(\sigma _c). \end{aligned}$$

Given that the \(\Gamma \)-limit of a sequence of functions, if it exists, is necessarily lower-semicontinuous [see, e.g., Proposition 1.28 in Braides (2002)] and is unique, \(scf(\sigma _c)\), as in (19), is a good candidate to be the \(\Gamma \)-limit value we seek. In the following, we prove that \(scf(\sigma _c)\) satisfies both conditions (26) and (28) of the definition of the \(\Gamma \)-limit (see Definition B.3); hence \(\Gamma \)-\(\lim _Nf_N(\sigma _c)\) exists and is unique.

  1) (liminf inequality). By way of contradiction, suppose that there exists a sequence \(({{\tilde{\sigma }}}_N)_N\), \(\tilde{\sigma }_N\rightarrow \sigma _c\), such that \(scf(\sigma _c)>\liminf _Nf_N(\tilde{\sigma }_N)\). On the other hand, since every \(f_\ell \) is continuous, \(f_\ell (\sigma _c)\le \liminf _j f_\ell (\sigma _j)\) for every sequence \((\sigma _j)_j\) converging to \(\sigma _c\). This condition holds, in particular, for the sequence \(({{\tilde{\sigma }}}_N)\) identified above; letting \(\ell \) be large, we can write

    $$\begin{aligned} \lim _\ell f_\ell (\sigma _c)\le \liminf _N \lim _\ell f_\ell ({{\tilde{\sigma }}}_N).\end{aligned}$$
    (20)

    Applying Definition B.1 to the right-hand side of (20), we get

    $$\begin{aligned} \liminf _N \lim _\ell f_\ell (\tilde{\sigma }_N)=\inf \left\{ \lim _N\lim _\ell f_\ell ({{\tilde{\sigma }}}_N): \tilde{\sigma }_N\in {\mathbb {R}}, {{\tilde{\sigma }}}_N\rightarrow \sigma _c, \exists \, \lim _N\lim _\ell f_\ell ({{\tilde{\sigma }}}_N)\right\} .\nonumber \\ \end{aligned}$$
    (21)

    We can now take \(\ell \) growing as N, and by (21) it follows

    $$\begin{aligned} \begin{aligned} \liminf _N \lim _N f_N ({{\tilde{\sigma }}}_N)&=\inf \left\{ \lim _N f_N({{\tilde{\sigma }}}_N): {{\tilde{\sigma }}}_N\in {\mathbb {R}}, {{\tilde{\sigma }}}_N\rightarrow \sigma _c, \exists \, \lim _N f_N({{\tilde{\sigma }}}_N)\right\} \\&=\liminf _N f_N ({{\tilde{\sigma }}}_N), \end{aligned} \end{aligned}$$

    where the last equality is precisely Definition B.1. Then, for \(\ell \) growing as N, the inequality (20) becomes

    $$\begin{aligned} \lim _N f_N(\sigma _c)\le \liminf _N f_N ({{\tilde{\sigma }}}_N). \end{aligned}$$

    Therefore,

    $$\begin{aligned} scf(\sigma _c)>\liminf _Nf_N({{\tilde{\sigma }}}_N) \ge \lim _N f_N(\sigma _c). \end{aligned}$$

    As a consequence, for N large enough,

    $$\begin{aligned} scf(\sigma _c)> f_N(\sigma _c). \end{aligned}$$

    Substituting the definitions of \(scf(\sigma _c)\) and \(f_N(\sigma _c)\) into the previous inequality, it follows that

    $$\begin{aligned} k\sigma _c-{{\tilde{r}}}_c>k\sigma _c-\rho _N(\sigma _c),\end{aligned}$$

    or, equivalently, \({\tilde{r}}_c < \rho _N(\sigma _c)\). This latter inequality contradicts Lemma A.6; hence the liminf inequality is satisfied.

  2) (existence of a recovery sequence). We choose as a converging sequence \(\sigma _N=\sigma _c+{1}/{N}\) and show that

    $$\begin{aligned} scf(\sigma _c)=\lim _Nf_N(\sigma _N). \end{aligned}$$
    (22)

    For (22) to be fulfilled, it is sufficient to prove that \({\tilde{r}}_c=\lim _N\rho _N(\sigma _N)\). To this end, we note that

    $$\begin{aligned} \vert \rho _N(\sigma _N)-{\tilde{r}}_c\vert \le \vert \rho _N(\sigma _N)-\rho (\sigma _N)\vert + \vert \rho (\sigma _N)-{\tilde{r}}_c\vert <\varepsilon . \end{aligned}$$

    This result follows from the uniform convergence of \(\rho _N\) to \(\rho \) in \((\sigma _c, \sigma _{\max }]\) (inferred from the arguments following Eq. (19)) and from the fact that \(\lim _{N}\rho (\sigma _N)={{\tilde{r}}}_c\). Hence, the second requirement is also satisfied.

Concerning the \(\Gamma \)-\(\lim _Nf_N(\sigma )\) for \(\sigma \in \{\sigma _{\min },\sigma _{\max }\}\), we can proceed as in the case of \(\sigma =\sigma _c\). Indeed, by using both the regularity of \(\rho _N\) and \(\rho \) and the convergence of \(\rho _N\) to \(\rho \), we get that \(\Gamma \)-\(\lim _Nf_N(\sigma )=scf(\sigma )\) for \(\sigma \in \{\sigma _{\min },\sigma _{\max }\}\).

In conclusion, we have ensured that \(\Gamma \)-\(\lim _Nf_N(\sigma )= scf(\sigma )\) for every \(\sigma \in [\sigma _{\min },\sigma _{\max }]\). Renaming \(sc{f}(\sigma )\) in (19) as \(f_\infty (\sigma )\), we have that

$$\begin{aligned} {f}_N(\sigma )\overset{\Gamma }{\rightarrow } f_{\infty }(\sigma ). \end{aligned}$$

Then, relying on Theorem B.8, we get in our case

$$\begin{aligned} \exists \min _{{\mathbb {R}}^+} f_{\infty }(\sigma ) \, =\, \lim _{N\rightarrow + \infty }\inf _{{\mathbb {R}}^+}{f}_N(\sigma ) \,=\, \lim _{N\rightarrow + \infty }\min _{{\mathbb {R}}^+}{f}_N(\sigma ).\end{aligned}$$
(23)

Moreover, since all functions \({f}_N\) admit a minimum point \({\sigma }_N^*\) (which exists by virtue of Lemma A.2), then, up to subsequences, \({\sigma }_N^*\) converges to a minimum point of \(f_{\infty }\). According to (19), the only minimum point of \(f_{\infty }\) is \({\sigma }_c\). Hence, by (23), it follows that

$$\begin{aligned} \lim _{N\rightarrow +\infty } k {\sigma }_N^*-\rho _N({\sigma }_N^*) =k{\sigma }_c-{{\tilde{r}}}_c. \end{aligned}$$
(24)

Accordingly,

$$\begin{aligned} {\sigma }_N^* \rightarrow {\sigma }_c. \end{aligned}$$

\(\square \)
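
As a numerical illustration of this convergence (a sketch under hypothetical parameter values, reusing R_N() from the sketch in the proof of Lemma A.3), the regularized problem can be handed to R's optimize routine, in the spirit of the experiments mentioned in Footnote 14; the Monte Carlo noise in the estimate of \(\rho _N\) is simply ignored here.

```r
# Sketch of the regularized problem: minimize f_N(sigma) = k*sigma - rho_N(sigma),
# with rho_N estimated by Monte Carlo through R_N() defined above.
k <- 0.5                                       # illustrative value only
f_N_hat <- function(sigma, N = 500, M = 200) {
  k * sigma - mean(replicate(M, R_N(N, sigma)))
}
set.seed(2)
optimize(f_N_hat, interval = c(0.06, 0.30))    # minimum point close to sigma_c
```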

Some basics of \(\Gamma \)-convergence

In this section, we introduce some abstract notions and results on \(\Gamma \)-convergence. We start by recalling the concepts of lower and upper limits and of lower-semicontinuous functions, needed to introduce the definition of the \(\Gamma \)-limit. We also define the lower-semicontinuous envelope of a function and provide an example of computation of the \(\Gamma \)-limit, noting how it can differ from the pointwise limit. Finally, we show that, under suitable conditions, \(\Gamma \)-convergence implies convergence of minimum values and minimizers.

From now on, unless otherwise specified, X will be a metric space equipped with the metric d.

Definition B.1

Let \(f:X\rightarrow {\overline{{\mathbb {R}}}}\). We define the lower limit (liminf for short) of f at x as

$$\begin{aligned} \begin{aligned} \liminf _{y\rightarrow x} f(y)&= \inf \{\liminf _jf(x_j): x_j \in X, x_j\rightarrow x\}\\&=\inf \{\lim _jf(x_j):x_j \in X, x_j\rightarrow x, \exists \lim _jf(x_j)\}, \end{aligned} \end{aligned}$$

and the upper limit (limsup for short) of f at x as

$$\begin{aligned} \begin{aligned} \limsup _{y\rightarrow x} f(y)&= \sup \{\limsup _jf(x_j): x_j \in X, x_j\rightarrow x\}\\&=\sup \{\lim _jf(x_j):x_j \in X, x_j\rightarrow x, \exists \lim _jf(x_j)\}. \end{aligned} \end{aligned}$$

Definition B.2

A function \(f:X\rightarrow {\overline{{\mathbb {R}}}}\) is lower-semicontinuous at \(x \in X\) if, for every sequence \((x_j)\) converging to x, we have

$$\begin{aligned} f(x)\le \liminf _jf(x_j), \end{aligned}$$
(25)

or, in other words, \(f(x)=\min \{\liminf _jf(x_j):x_j\rightarrow x\}.\) We will say that f is lower-semicontinuous on X if it is l.s.c. at all \(x \in X\).

Definition B.3

(\(\Gamma \)-convergence) A sequence \(f_j:X\rightarrow {\overline{{\mathbb {R}}}}\) \(\Gamma \)-converges in X to \(f_\infty :X\rightarrow {\overline{{\mathbb {R}}}}\) if for all \(x \in X\) we have

  (i) (liminf inequality) for every sequence \((x_j)\) converging to x,

    $$\begin{aligned} f_\infty (x)\le \liminf _jf_j(x_j); \end{aligned}$$
    (26)
  (ii) (limsup inequality) there exists a sequence \((x_j)\) converging to x such that

    $$\begin{aligned} f_\infty (x)\ge \limsup _jf_j(x_j). \end{aligned}$$
    (27)

    The function \(f_\infty \) is called the \(\Gamma \)-limit of \((f_j)\), and we write \(f_\infty =\Gamma \)\(\lim _jf_j\).

    Condition (ii) can be substituted by the following:

    (ii’) (existence of a recovery sequence) there exists a sequence \((x_j)\) converging to x such that

      $$\begin{aligned} f_\infty (x)= \limsup _jf_j(x_j). \end{aligned}$$
      (28)

Definition B.4

Let \(f:X\rightarrow {\overline{{\mathbb {R}}}}\) be a function. Its lower-semicontinuous envelope scf is the greatest lower-semicontinuous function not greater than f, that is, for every \(x \in X\)

$$\begin{aligned} scf(x)=\sup \{g(x): g\ l.s.c.,\, g\le f\}. \end{aligned}$$

Proposition B.5

If \(f_j \rightarrow f\) uniformly on an open set U, then

$$\begin{aligned} \Gamma -\lim _j f_j=scf \qquad \text {on U}. \end{aligned}$$

Proof

See Remark 1.38 of Braides (2002).

Below, we report an example that highlights the different roles of the limsup and liminf inequalities. It is also useful for visualizing, in the simple case of a sequence of real functions, the difference between the classical pointwise (or uniform) limit and the \(\Gamma \)-limit.

Example B.6

Let \(f_j(t)\) be a sequence of functions, where

$$\begin{aligned} f_j(t)= {\left\{ \begin{array}{ll} \pm 1 &{} \text {if}\quad t=\pm {1}/{j},\\ 0 &{} \quad \text {otherwise}. \end{array}\right. } \end{aligned}$$

Note that \(f_j\rightarrow 0\) pointwise, but \(\Gamma \)-\(\lim _j f_j=f_\infty \), where

$$\begin{aligned} f_\infty (t)= {\left\{ \begin{array}{ll} -1 &{} \text {if}\quad t=0,\\ 0 &{} \text {if}\quad t\ne 0. \end{array}\right. } \end{aligned}$$

Indeed, the sequence \(f_j\) converges pointwise (and hence also \(\Gamma \)-converges) to 0 in \({\mathbb {R}}\setminus \{0\}\), while the optimal sequence for \(t=0\) is \(t_j=-1/j\), for which \(f_j(t_j)=-1.\)
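
A two-line numerical check of this example (our illustration):

```r
# Along the recovery sequence t_j = -1/j -> 0 the values f_j(t_j) equal
# -1 = f_inf(0), although f_j(0) = 0 for every j (pointwise limit).
f_j <- function(t, j) ifelse(t == 1/j, 1, ifelse(t == -1/j, -1, 0))
sapply(1:5, function(j) f_j(-1/j, j))   # -1 -1 -1 -1 -1
sapply(1:5, function(j) f_j(0, j))      #  0  0  0  0  0
```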

Definition B.7

(Coerciveness conditions) A function \(f:X\rightarrow {\overline{{\mathbb {R}}}}\) is mildly coercive if there exists a nonempty compact set \(K\subset X\) such that \(\inf _X f=\inf _K f\). A sequence \((f_j)\) is equi-mildly coercive if there exists a nonempty compact set \(K \subset X\) such that \(\inf _X f_j=\inf _K f_j\) for all j.

Theorem B.8

Let (Xd) be a metric space, let \((f_j)\) be a sequence of equi-mildly coercive functions on X, and let \(f_\infty =\Gamma \)\(\lim _j f_j\); then

$$\begin{aligned} \exists \min _{X}f_{\infty }(x) = \lim _{j\rightarrow + \infty }\inf _{X}{f}_j(x). \end{aligned}$$
(29)

Moreover, if all functions \({f}_j\) admit a minimizer \(x_j^*\), then, up to subsequences, \(x_j^*\) converges to a minimum point of \(f_{\infty }\).

Proof

See Theorem 1.21 and Remark 1.22 in Braides (2002).

Continuity of \(\sigma \mapsto \rho (\sigma )\)

Given \( X\equiv [0,1]\) and \(\Sigma \equiv [\sigma _{\min }, \sigma _{\max }]\), define \(G: X\times \Sigma \rightarrow {\mathbb {R}}\) as \(G(x,\sigma )=F(x;\sigma )-x\), where F has been introduced in Sect. 2. Then, for any \(\sigma \in \Sigma \), we can define \(D_\sigma =\{x\in X:G(x,\sigma )\le 0\}\subseteq X\). With this notation, \(\rho (\sigma )\) as defined in Lemma 2.1 can be rephrased equivalently as

$$\begin{aligned} \rho (\sigma )=\min _{x\in D_\sigma } x, \end{aligned}$$

because \(F(0; \sigma ) > 0\) for every \(\sigma \in [\sigma _{\min }, \sigma _{\max }]\). Note that the relation \(\sigma \mapsto D_\sigma \) is formally a correspondence, mapping each \(\sigma \in \Sigma \) into a subset of X. We now introduce the definitions of upper- and lower-semicontinuity for a correspondence, as given in Sundaram (1996), and prove a technical lemma.
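
This constrained form suggests a second way of computing \(\rho (\sigma )\) numerically (a sketch with illustrative parameters, \(\mu =0.25\) as in Footnote 4): locate the first grid point belonging to \(D_\sigma \) and refine the smallest zero of G with R's uniroot.

```r
# Sketch: rho(sigma) as the smallest zero of G(x, sigma) = F(x; sigma) - x.
rho_via_G <- function(sigma, mu = 0.25, grid = seq(0, 1, by = 1e-4)) {
  G <- pnorm(grid, mean = mu, sd = sigma) - grid    # G(., sigma) on the grid
  i <- which(G <= 0)[1]                             # first grid point in D_sigma
  uniroot(function(x) pnorm(x, mean = mu, sd = sigma) - x,
          interval = c(grid[i - 1], grid[i]))$root  # refine the smallest zero
}
rho_via_G(0.10)   # low equilibrium; agrees with the iteration from r_0 = 0
```

Note that \(G(0,\sigma )=F(0;\sigma )>0\), so the first grid point of \(D_\sigma \) cannot be \(x=0\) and the bracketing interval is well-defined.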

Definition C.1

A correspondence \(\Phi : \Sigma \rightarrow P(X)\), where P(X) denotes the power set of X, is said to be:

  i) upper-semicontinuous (usc) at \(\sigma \) if, for all open sets V such that \(\Phi (\sigma ) \subset V\), there exists an open set U containing \(\sigma \) such that \(\sigma '\in U\) implies \(\Phi (\sigma ')\subset V\). We say that \(\Phi \) is usc on \(S\subseteq \Sigma \) if it is usc at each \(\sigma \in S\);

  ii) lower-semicontinuous (lsc) at \(\sigma \) if, for all open sets V such that \(V\cap \Phi (\sigma ) \ne \emptyset \), there exists an open set U containing \(\sigma \) such that \(\sigma '\in U\) implies \(V\cap \Phi (\sigma ')\ne \emptyset \). We say that \(\Phi \) is lsc on \(S\subseteq \Sigma \) if it is lsc at each \(\sigma \in S\).

Lemma C.2

The correspondence \(\sigma \mapsto D_\sigma \) is compact-valued; moreover, it is both upper- and lower-semicontinuous on the intervals \([\sigma _{\min },\sigma _c]\) and \((\sigma _c,\sigma _{\max }]\). Therefore, on the same intervals, it is continuous.

Proof

Compactness is easy to see, since \(D_\sigma \) is a closed and bounded subset of X. Closedness follows from the fact that \(D_\sigma \) is the preimage of a closed set under a continuous function. We now prove upper-semicontinuity on \((\sigma _{\min },\sigma _c)\). To this end, fix \(\sigma \in (\sigma _{\min },\sigma _c)\) and take any open set \(V\subset {\mathbb {R}}\) containing \(D_\sigma \). Now define \(U=(\sigma -\delta ,\ \sigma +\delta )\subset (\sigma _{\min },\sigma _c)\), for \(\delta >0\), and consider \(\sigma '\) such that \(\sigma '\in U\). By way of contradiction, suppose that \(D_{\sigma '}\nsubseteq V\); put differently, \(D_{\sigma '}\cap V^c\ne \emptyset \). Then there exists \(x\in X\) such that \(x\in V^c\) and \(G(x,\sigma ')\le 0\). Since G is not constant and \(D_{\sigma '}\) is not a singleton, we can assume \(G(x,\sigma ')< 0\). On the other hand, \(x\in V^c\) implies \(x\notin V\), hence \(x\notin D_\sigma \). Therefore, \(G(x, \sigma )>0\). As a consequence, we can find \(\varepsilon >0\) such that \(|G(x,\sigma )-G(x, \sigma ')|>\varepsilon \); this latter inequality contradicts the continuity of G in \(\sigma \), since, by assumption, \(\sigma ' \in U=(\sigma -\delta ,\ \sigma +\delta )\). To prove usc for \(\sigma =\sigma _{\min }\), we use the same argument, where now \(U=(\sigma _{\min }, \sigma _{\min }+\delta )\), \(\delta >0\). Similarly, for \(\sigma =\sigma _c\), we can take \(U=(\sigma _c-\delta , \sigma _c)\), \(\delta >0\). The usc on the open interval \((\sigma _c,\sigma _{\max })\) is proved using the same argument, as is the usc for \(\sigma =\sigma _{\max }\), considering \(U=(\sigma _{\max } -\delta , \sigma _{\max })\), \(\delta >0\).

To prove the lower-semicontinuity on the open interval \((\sigma _{\min },\sigma _c)\), we consider V such that \(V\cap D_\sigma \ne \emptyset \). Consider, as before, \(U=(\sigma -\delta ,\ \sigma +\delta )\), for \(\delta >0\), and take any \(\sigma '\in U\). By way of contradiction, suppose that \(V\cap D_{\sigma '}=\emptyset \). This means that there exists at least one \(x\in V\cap D_\sigma \) that does not belong to \(D_{\sigma '}\). Since \(\sigma <\sigma _c\), \(D_\sigma \) is not a singleton and therefore we can assume \(G(x,\sigma )<0\); moreover, \(G(x,\sigma ')>0\). As before, we contradict the continuity of G. Consider now \(\sigma =\sigma _{\min }\); the argument holds for \(U=(\sigma _{\min }, \sigma _{\min }+\delta )\), \(\delta >0\).

Concerning \(\sigma =\sigma _c\), by Lemma A.1, we know that there exist exactly two solutions to the equation \(G(x,\sigma _c)=0\); the smallest, \(x^l\), is such that \(x^l<\mu \), whereas the largest one is \(x^h>\mu \). Moreover, for all \(\sigma '\in (\sigma _c-\delta , \sigma _c)\), \(G(x^l,\sigma ')<0\) and therefore \(x^l\in V\cap \Phi (\sigma ')\) for any V such that \(x^l \in V\). Therefore, for such \(\sigma '\), the lsc condition is satisfied. Finally, lsc on \((\sigma _c, \sigma _{\max })\) is proved using the same argument as for the open set \((\sigma _{\min },\sigma _c)\), while the lsc at \(\sigma =\sigma _{\max }\) is obtained by considering \(U=(\sigma _{\max }-\delta , \sigma _{\max })\).

To provide evidence that the correspondence \(\sigma \mapsto D_\sigma \) is not continuous at \(\sigma _c\) (from the right), we show that for \(\sigma =\sigma _c\) the lsc fails when considering the open interval \(U=(\sigma _c,\sigma _c+\delta )\). As stated, in case \(\sigma =\sigma _c\), there exist two solutions to the equation \(G(x,\sigma _c)=0\) such that \(x^l<\mu <x^h\). Consider now V such that \(x^l\in V\) but \(V\cap [\mu ,1]=\emptyset \). In this way, \(V\cap D_{\sigma _c}\ne \emptyset \). Now define \(U=(\sigma _c,\sigma _c+\delta )\), for \(\delta >0\) and take any \(\sigma '\in U\). In this case, \(V\cap D_{\sigma '}=\emptyset \).Footnote 21 This contradicts lower-semicontinuity. \(\square \)

Proposition C.3

The map \(\sigma \mapsto \rho (\sigma )\) is continuous on \([\sigma _{\min },\sigma _c]\) and on \((\sigma _c,\sigma _{\max }]\).

Proposition C.3 is now a straightforward corollary of Theorem 9.14 of Sundaram (1996). The assumptions of that theorem are that the target function is continuous and that the constraint, defined through the correspondence \(\sigma \mapsto D_\sigma \), is compact-valued and continuous. Our target function is clearly continuous, and the assumptions on \(\sigma \mapsto D_\sigma \) are ensured by Lemma C.2. For convenience, we report below the statement of Theorem 9.14 taken from Sundaram (1996), adapting it to our notation. Note that \(f(x,\sigma )\) and \(f^*(\sigma )\), as in the statement of that theorem, correspond, respectively, to the identity function \(x\mapsto x\) in the first component and to our map \(\rho (\sigma )\).

Theorem C.4

(Theorem 9.14, Sundaram) Let \(f:X \times \Sigma \rightarrow {\mathbb {R}}\) be a continuous function and let \(D_\sigma :\Sigma \rightarrow P(X)\) be a compact-valued, continuous correspondence. Let \(f^*:\Sigma \rightarrow {\mathbb {R}}\) be defined by

$$\begin{aligned} f^*(\sigma )=\min \{ f(x,\sigma ) | x\in D_\sigma \}. \end{aligned}$$

Then \(f^*\) is a continuous function on \(\Sigma \).
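
Tracing \(\sigma \mapsto \rho (\sigma )\) on a grid (a sketch reusing rho_via_G() above, with illustrative parameters) makes both conclusions visible: the map is continuous on each of the two intervals, with an upward jump at \(\sigma _c\) corresponding to the failure of lower-semicontinuity exhibited in Lemma C.2.

```r
# Sketch: plot sigma -> rho(sigma); the two continuous branches are separated
# by an upward jump at sigma_c.
sig <- seq(0.06, 0.30, by = 0.002)
plot(sig, sapply(sig, rho_via_G), pch = 20,
     xlab = expression(sigma), ylab = expression(rho(sigma)))
```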



Cite this article

Maggistro, R., Pellizzari, P., Sartori, E. et al. Dangerous tangents: an application of \(\Gamma \)-convergence to the control of dynamical systems. Decisions Econ Finan 45, 451–480 (2022). https://doi.org/10.1007/s10203-022-00372-z
