Introduction

This paper presents a verification and an extension of the study entitled “Uncertain outcomes and climate change policy” by Pindyck, J. Environ. Econ. Manag. (Pindyck 2012), referred to hereafter as P12. Pindyck incorporates the distribution of (uncertain) temperature change and the distribution of the (uncertain) impact of this change on the growth of consumption, and computes willingness to pay (WTP), i.e. “the fraction of consumption ... that society would be willing to sacrifice, now and throughout the future, to ensure that any increase in temperature at a specific horizon H is limited to \(\tau \).” These fractions are typically below 2%; P12 states that this is consistent with the adoption of a moderate abatement policy.

P12 can be interpreted as a very simplified integrated assessment model (IAM) for the analysis of climate change. Such models can be divided into two main subcategories (Weyant 2017, p. 117): detailed process IAMs (DP IAMs), which are “more disaggregated and seek to provide projections of climate change impacts at detailed regional and sectoral levels,” and benefit–cost IAMs (BC IAMs), which are “more aggregated representations of climate change mitigation costs and aggregate impacts by sector and region into a single economic metric.” P12 can be considered a very special case of BC IAMs. After all, it focuses on the benefits that may be gained from achieving a certain level of climate change mitigation, disregarding the precarious issue of properly considering costs. Indeed, according to Weyant (2017), various studies (e.g. Weyant 1999, 2004; De la Chesnaye and Weyant 2006; Clarke et al. 2014) provide widely different estimates of the efforts required to stabilize greenhouse gas (GHG) emissions this century; these range from 0.1 to 10% of the gross world product (GWP) per year.

Currently, there is an ongoing debate in the academic community concerning the usefulness of IAMs (Metcalf and Stock 2017; Pindyck 2017; Weyant 2017; Bretschger and Pittel 2020). This debate is motivated by the underlying uncertainties surrounding this kind of model. These uncertainties include correct structural assumptions; the distribution of the values of related parameters; and ethical issues related to the proper way to consider the well-being of future generations (i.e. which discount rate to apply) and of poor countries, which may be substantially ignored by measures related to worldwide consumption or production. These ethical considerations are anything but details: Pindyck (2017) remarks that the different estimates of the social cost of carbon (SCC) yielded by the three most commonly used models—the Dynamic Integrated Climate and Economy (DICE) model (Nordhaus 2014), the Climate Framework for Uncertainty, Negotiation and Distribution (FUND) model (Tol 2002) and the Policy Analysis of the Greenhouse Effect (PAGE) model (Hope 2006)—essentially boil down to a different choice of the discount rate. Loosely speaking, ceteris paribus, it is possible to obtain values for the SCC differing by two orders of magnitude by simply applying an “ethical” discount rate of 0% rather than a “market” discount rate. Golosov et al. (2014) propose a simple expression for the damage caused by emissions as a function of the discount rate, the elasticity of damage, and carbon decay in the atmosphere.

P12 appears to be an influential paper that addresses difficult questions with crystal-clear thinking and terse prose. The article has been cited frequently (48 times on Scopus and 158 on Google Scholar) in its relatively short lifespan. Most of the key arguments in P12 are crucially based on figures computed from the model; data and estimates are built on the IPCC’s Fourth Assessment Report (IPCC 2007a, b, c). We believe that a framework such as that of P12, which is characterized by elegant, albeit sometimes ad hoc, assumptions, could be continuously updated in terms of its calibration and formulation. In this way, the framework could cleverly summarize a vast amount of knowledge on climate change, producing outcomes that are easy to communicate to the general public (i.e. “We should invest \(X\%\) of our production to prevent damage from climate change”). For example, even though P12 addresses the global effects of temperature change, the theoretical model could easily be extended to model the regional-level or even country-level effects of global warming. After all, more recent papers (Dell et al. 2012; Bansal and Ochoa 2012; Burke et al. 2015; Moore and Diaz 2015) provide evidence that, ceteris paribus, a one-degree Celsius increase in temperature would have more negative effects in countries with higher average temperature levels, and may even be beneficial in low-temperature countries. The computations involved in the model are technically demanding. Basically, they require the evaluation of numerous non-trivial three-dimensional integrals (over a long time span, over an estimated distribution of temperature changes, and over an estimated distribution of an impact coefficient). In some places, replication was facilitated by the working paper by Pindyck (2009), hereafter referred to as P09. It is useful to be able to read two “versions” of the same work, and to access, whenever necessary, an alternative wording of the same procedure or description. Even so, we were unable to replicate a sizable number of computations.

Replication is of paramount importance in science, and lies at the very heart of what differentiates science from cheap talk and non-scientific arguments. There is a growing awareness of the need for more replication studies; too many scholars have sadly admitted faltering or failed attempts to reproduce other researchers’ work (Baker 2016). Regardless of how it is defined, we believe that replication is very important for another reason: boldly put, we believe that replicating a paper is the only way to (fully) understand it. This may also be true for theoretical studies (say, reworking all the proofs), but we have no doubt that it is required to master numerical, empirical or simulation-based studies. It often takes an inordinate amount of effort, and even though the verification is rarely published as a standalone paper, the rewards are great: in fact, one of the authors of this article believes that most, if not all, of what he knows comes from the efforts spent struggling with the details of papers to be replicated.

For additional clarity, we will no longer use the word “replication” in what follows, for reasons described convincingly in Clemens (2017): the term is used in the literature to refer to distinct procedures, and there is no agreed standard on its precise meaning. In this paper, we present a verification and an extension of P12, as defined in Clemens (2017). Verification seeks to reproduce the results of the original paper using the same specification of the model and the same data; it should, therefore, not produce discrepant results unless there are plain errors, numerical instabilities or fraud in the original work. An extension is a robustness check of the original work: it runs the original model on different data, seeking to explore the stability of the original results. Hence, while we expect the results of a verification to be the same as those in the original work, there is no reason to expect the same after an extension.

The first part involves the verification, in which we seek to reproduce the results of P12. Neither P12 nor P09 explicitly states the software used for computation. In a personal communication, however, Robert Pindyck stated that MATLAB was used, and he provided us with some code. In what follows, we use R (R Core Team 2020), a popular and reliable free software platform for statistical and numerical computing and data visualization. While we stress that some scholars, such as Anderson et al. (2008), appear to require that a “verification” should use the same software (and perhaps the same hardware), we believe that the use of MATLAB, R or any other professionally trusted software (e.g. Octave, Mathematica) should not alter the substantial results of a study. In other words, if different results are obtained using different software programs, the case is indeed worth studying, as undertaken by McCullough and Vinod (2003). Subsequently, we move forward and extend the model, re-estimating it using more recent data. In particular, we use data from the IPCC’s Fifth Assessment Report (IPCC 2014), whereas P12 was based on the earlier Fourth Assessment Report (IPCC 2007a, b, c).

We obtained two main results from our verification. First, only some of P12’s results can be reproduced, and significant discrepancies exist in several cases. While some of the differences have little effect on the economic meaning or interpretation, there are situations where our estimates of WTP are 20- to 60-fold smaller than the ones in P12.

The second problematic outcome of our verification is more profound: in some of the cases listed in P12, the results for WTP are extremely sensitive to choices of numerical parameters that are used in the computation but otherwise bear no relationship to the model. For instance, even though some integrals are naturally defined on the real (half) line, integration routines require setting an upper limit for the domain: this should be intuitively irrelevant, but it turns out that it may introduce non-obvious and large biases. A more detailed discussion of this matter is deferred to the “Discussion” section. Indeed, our experience suggests that it may be impossible to select the “right” parameters that lead to the “correct” results. As a consequence, the use of the default options provided by the numerical software may cause real errors, especially if there is no article, such as P12, acting as a target for fine-tuning.

Regarding the extension, we change the data source and “redo the paper” to give an idea of how policy suggestions would vary between the two subsequent IPCC reports. Essentially, more recent data support a temperature change distribution over the next century that is higher on average, but more concentrated. The effects on WTP are, therefore, twofold: the higher mean would increase our WTP, but since extreme events are less likely to occur, the lower risk would simultaneously tend to reduce WTP. The overall effect is a slight increase in WTP for strong mitigation and a slight decrease in WTP in the case of moderate mitigation.

The paper is organized as follows. The next section briefly summarizes the model contained in Pindyck’s work and describes the major conclusions of P12; we then present our verification strategy and explain the functioning of the routines used in the verification. P12 is a rich paper with plenty of numerical results and robustness tests or discussions. We reproduce a vast array of outcomes, including diagrams, key tables and robustness checks of the original paper. The subsequent section is devoted to the extension. Then some of the most relevant results of the previous parts are discussed in detail. We close with final remarks and suggestions for future research.

Theory

This section describes the model presented in P12, which builds on the work of Weitzman (2009). It is assumed that the global temperature increase \(T_H\) at horizon H is distributed as a three-parameter displaced gamma density of the form:

$$\begin{aligned} T_H\sim f(x)=f(x;r=r_T,\lambda =\lambda _T,\theta =\theta _T)=\frac{\lambda ^r}{\varGamma (r)}(x-\theta )^{r-1}e^{-\lambda (x-\theta )},\quad x\ge \theta , \end{aligned}$$
(1)

where \(\theta \) is the displacement parameter and \(\varGamma (r)=\int _0^\infty s^{r-1}\exp (-s)ds\) is the Gamma function. If \(T_H\) is the increase in temperature after H years, the increase at time t, \(T_t\), evolves according to

$$\begin{aligned} T_t=2T_H[1-(1/2)^{t/H}], \end{aligned}$$
(2)

so that, in particular, \(T_t\rightarrow 2T_H\) as \(t\rightarrow \infty \).
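To fix ideas, the density (1) and the temperature path (2) translate directly into R. The following sketch is ours (the function names are not from P12); the density parameters are the values for \(T_H\) that we estimate in the “Verification” section.

```r
## Displaced gamma density of Eq. (1): a gamma density shifted by theta;
## base R's dgamma() handles shape r and rate lambda, we only displace x.
ddispgamma <- function(x, r, lambda, theta) {
  dgamma(x - theta, shape = r, rate = lambda)
}

## Temperature path of Eq. (2): T_t -> 2*T_H as t -> infinity.
temp_path <- function(t, TH, H = 100) {
  2 * TH * (1 - 0.5^(t / H))
}

## Example: plot the fitted density of T_H (our estimates, see "Verification").
curve(ddispgamma(x, r = 3.9, lambda = 0.92, theta = -1.22),
      from = -1.22, to = 10, xlab = "temperature increase", ylab = "density")
```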

P12 argues that a model that incorporates warming effects on the growth rate of GDP, rather than the level of GDP, is more appropriate. In this sense, it differs from standard formulations adopted in the literature; for example, Weitzman (2010) assumes a welfare-equivalent consumption-loss multiplier for the level of GDP (\(L(T_t)=\exp (-\beta T_t^2)\)) and, hence, GDP (or consumption) at t is \(L(T_t)\mathrm {GDP}_t\), where \(\mathrm {GDP}_t\) is the “would have been” GDP at t in the absence of warming. Conversely, P12 follows Dell et al. (2009), and assumes that, in the absence of warming, GDP and consumption would grow at constant rate \(g_0\), and that a positive value for \(T_t\) leads to the following welfare-equivalent loss in the instantaneous growth rate:

$$\begin{aligned} g_t=g_0-\gamma T_t. \end{aligned}$$
(3)

Since this is a verification study, we keep to this formulation, even though there is no consensus in the literature on whether the damaging effects of temperature impact the level or the growth rate of GDP. While the standard approach is to assume a level impact on output, several papers provide empirical evidence that climate change affects the growth rate instead; see Dell et al. (2012), Bansal and Ochoa (2012), Moore and Diaz (2015) and Karydas and Xepapadeas (2019). A convincing analysis of this issue is contained in the recent study by Hambel et al. (2021), in which a fully dynamic stochastic model allows the two approaches to be compared.

The reduced form of Eq. (3) is a crude simplification to incorporate climate damage, setting aside entirely the channels or mechanisms through which the rise in temperature affects consumption. Other papers, in different contexts, appear to follow the same path: in Park (2016), say, the difference of log payrolls depends on the number of days when temperature is above \(35^\circ C\), and again no single mechanism (among many possible drivers, such as agricultural output, productivity or reduced labor supply) is singled out. Other works are more insightful in this respect, see Fankhauser and Tol (2005) or Donadelli et al. (2017). In the latter work, the stochastic productivity rate is fed by, and negatively affected by, the same shocks that drive temperature. As a consequence, transmission channels are more evident, and different specifications can be used to trace the long- and short-term effects of temperature on growth rates. The recent work by Bretschger and Pattakou (2019), using a model with endogenous production and capital accumulation, also stresses how the selected damage function has a significant impact on the growth rate of the economy (here, as in P12, production is completely consumed), through reduced accumulation and capital depreciation due to increasing pollution and rising temperatures. A number of scenarios and graphs in Bretschger and Pattakou (2019) or in Golosov et al. (2014) provide numerical and visual support for a linear decrease in the equilibrium growth rate.

Combining (2) and (3), the path of the growth rate turns out to be

$$\begin{aligned} g_t=g_0-2\gamma T_H[1-(1/2)^{t/H}]. \end{aligned}$$
(4)

Hence, consumption (or GDP) \(C_t=C_0 \exp (\int _0^t g(s)\mathrm{d}s)\) (with warming) can be computed as

$$\begin{aligned} C_0 \exp \bigg (-\frac{2 \gamma H T_H}{\ln (1/2)}+(g_0-2 \gamma T_H)t+\frac{2 \gamma H T_H}{\ln (1/2)} (1/2)^{t/H} \bigg ), \end{aligned}$$
(5)

for any t. Normalizing initial consumption \(C_0\) to 1 and equating final consumption (5) at \(t=H\) with what would be obtained using the above-mentioned welfare-equivalent loss function on levels of GDP, P12 obtains

$$\begin{aligned}&\exp \bigg (-\frac{2 \gamma H T_H}{\ln (1/2)}+(g_0-2 \gamma T_H)H+\frac{2 \gamma H T_H}{\ln (1/2)} (1/2)^{H/H} \bigg )\nonumber \\&\quad = L(T_H)\exp (g_0H)=\exp (g_0H-\beta T_H^2), \end{aligned}$$
(6)

and, subsequently, the relationship between \(\beta \) and \(\gamma \):

$$\begin{aligned} \gamma =1.79\frac{\beta T_H}{H}. \end{aligned}$$
(7)

In other words, given the temperature increase at horizon H and the assumed deterministic evolution of temperature, \(\gamma \) is such that, at time H, consumption at a reduced growth rate is identical to that which would be obtained using a more standard loss function on levels of GDP, see Equation (6). Under these strong assumptions, relation (7) follows, allowing us to switch from the effects on levels of GDP to the effects on growth rates of GDP.
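The constant 1.79 in (7) can be recovered explicitly: collecting terms in (6) gives \(\gamma =\beta T_H/\big [(2+1/\ln (1/2))H\big ]\), and the factor \(1/(2+1/\ln (1/2))\approx 1.79\). A one-line check in R:

```r
## Solving Eq. (6) for gamma yields gamma = beta * T_H / ((2 + 1/log(1/2)) * H),
## so the coefficient appearing in Eq. (7) is 1/(2 + 1/log(0.5)).
1 / (2 + 1 / log(0.5))
#> [1] 1.794353
```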

P12 uses estimates of economic losses from IPCC (2007c) to infer mean and confidence intervals for \(\beta \) and then, through (7), for \(\gamma \). The mean and the confidence interval are finally used to fit a displaced gamma density for the random variable \(\gamma \), \(f_{\gamma }(y)=f_{\gamma }(y;r_\gamma ,\lambda _\gamma ,\theta _\gamma )\), appearing in (3).

P12 defines the social utility function

$$\begin{aligned} U(C_t)=\frac{C_t^{1-\eta }}{1-\eta }, \end{aligned}$$
(8)

where \(\eta \) is the index of relative risk aversion of society. It is convenient in what follows to set \(u(C_t)=C_t^{1-\eta }\) so that \(U(C_t)=\frac{1}{1-\eta }u(C_t)\).

The willingness to pay \(w^*(\tau )\) is the “fraction of consumption – now and throughout the future – society would sacrifice to ensure that an increase in temperature at a specific horizon H is limited to an amount \(\tau \)”, see p. 292 in P12. In other words, society is willing to sacrifice up to a fraction \(w^*(\tau )\) of consumption to truncate the distribution f(T), so that \(T\le \tau \). More formally, if no action is taken, social welfare would be

$$\begin{aligned} W_2&=\iiint U({{\tilde{C}}}_t)e^{-\delta t} f(x)f_{\gamma }(y)\;\mathrm{d}t\;\mathrm{d}x\;\mathrm{d}y\\&=\frac{1}{1-\eta }\iiint u({{\tilde{C}}}_t)e^{-\delta t} f(x)f_{\gamma }(y)\;\mathrm{d}t\;\mathrm{d}x\;\mathrm{d}y\\&=\frac{1}{1-\eta } G_\infty , \end{aligned}$$
(9)

where the tilde emphasizes the random nature of the quantity, \(0\le t \le \infty \), the uncertain temperature increase x spans the interval \(\theta _T\le x\le \infty \) and the impact coefficient \(\gamma \) is in \(\theta _\gamma \le \gamma \le \infty \).

If society sacrifices a fraction \(w^*(\tau )\) of consumption, there are two effects in the computation of social welfare: first, only the remaining part of consumption, \(C'_t=(1-w(\tau ))C_t\), is used as an argument of the utility function; second, integration with respect to the variable x will be bounded by \(\tau \) and taken with respect to a truncated and renormalized density \(f_\tau (x)\), where \(f_\tau (x)={\mathbf {1}}_{x\le \tau } f(x)/F(\tau )\) and the normalizing constant is

$$\begin{aligned} F(\tau )=\int _{\theta _T}^\tau f(x)\;\mathrm{d}x. \end{aligned}$$
(10)

Hence, given the upper threshold \(\tau \) for temperature increase, social welfare (with sacrifice) is

$$\begin{aligned} W_1(\tau )&=\iiint U({{\tilde{C}}}'_t) e^{-\delta t} f_\tau (x)f_{\gamma }(y)\;\mathrm{d}t\;\mathrm{d}x\;\mathrm{d}y\\&=\frac{(1-w(\tau ))^{1-\eta }}{1-\eta }\iiint u({{\tilde{C}}}_t) e^{-\delta t} f_\tau (x)f_{\gamma }(y)\;\mathrm{d}t\;\mathrm{d}x\;\mathrm{d}y\\&=\frac{(1-w(\tau ))^{1-\eta } }{1-\eta }G_\tau , \end{aligned}$$
(11)

where the integration domains are \(0\le t \le \infty \), \(\theta _T\le x\le \tau \) and \(\theta _\gamma \le \gamma \le \infty \), respectively, for the variables \(t\), \(T\) and \(\gamma \).

Willingness to pay \(w^*(\tau )\) is then the solution of the equation \(W_2=W_1(\tau )\). Using (9) and (11), WTP can be written as

$$\begin{aligned} w^*(\tau )=1-\left[ \frac{G_\infty }{G_\tau }\right] ^{\frac{1}{1-\eta }}. \end{aligned}$$
(12)

In this setup, WTP is a pure number that makes welfare under the scenario where the temperature increase is limited to \(\tau \) equal to the scenario involving no action being taken. Observe that, ultimately, WTP can be readily computed once the two three-dimensional integrals \(G_\infty \) and \(G_\tau \) are evaluated. It turns out that these computations are far from trivial in various constellations of parameters, and must be performed with considerable care. Indeed, in Section 5.2 of P12, a simple case is examined in which no uncertainty is assumed on T and \(\gamma \), which are replaced by a known \(T_H\) and by the mean \({\overline{\gamma }}\) of density \(f_\gamma \) (denoted as g in P12), see Fig. 4 in P12. Technically speaking, this makes the previous integrals one-dimensional and, more importantly, results can be contrasted with more general situations where uncertainty plays a role, spreading the set of feasible outcomes in ways depicted by the estimated densities for T and \(\gamma \).
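To make the computation concrete, the following R sketch evaluates \(G_\infty \), \(G_\tau \) and the WTP of (12) with the cubature package used in our verification. It is a minimal outline under baseline parameters, not the code actually used for the tables: function names are ours, the density parameters are the values we estimate in the “Verification” section, and the truncation points anticipate \(T_{max}=15\) and \(\gamma _{max}=0.0007\) discussed in the “Discussion” section.

```r
## Minimal sketch of Eq. (12) with the baseline parameters (Row 1 of Table 1).
library(cubature)

H     <- 100      # horizon (years)
g0    <- 0.02     # baseline growth rate
eta   <- 2        # index of relative risk aversion
delta <- 0        # discount rate
t_max <- 500      # upper limit for time, as in P12
T_max <- 15       # truncation of the support of T_H (see "Discussion")
g_max <- 7e-4     # truncation of the support of gamma (see "Discussion")
rT <- 3.9;  lT <- 0.92;  thT <- -1.22      # our estimates for T_H
rG <- 4.43; lG <- 20939; thG <- -7.28e-5   # our estimates for gamma

## Consumption path of Eq. (5), with C_0 = 1.
cons <- function(t, TH, gam) {
  a <- 2 * gam * H * TH / log(0.5)
  exp(-a + (g0 - 2 * gam * TH) * t + a * 0.5^(t / H))
}

## Integrand u(C_t) e^(-delta t) f(x) f_gamma(y), with v = (t, x, y).
integrand <- function(v) {
  cons(v[1], v[2], v[3])^(1 - eta) * exp(-delta * v[1]) *
    dgamma(v[2] - thT, shape = rT, rate = lT) *
    dgamma(v[3] - thG, shape = rG, rate = lG)
}

## G_tau with x integrated up to x_upper; G_infinity is approximated by G(T_max).
G <- function(x_upper) {
  hcubature(integrand, lowerLimit = c(0, thT, thG),
            upperLimit = c(t_max, x_upper, g_max))$integral
}

wtp <- function(tau) {
  Ftau <- pgamma(tau - thT, shape = rT, rate = lT)   # F(tau) of Eq. (10)
  1 - (G(T_max) / (G(tau) / Ftau))^(1 / (1 - eta))   # Eq. (12)
}

wtp(3)   # should land close to the baseline w*(3) = 0.0053 of Table 1
```

As the “Discussion” section documents, the apparently innocuous truncation points \(T_{max}\) and \(\gamma _{max}\) can matter a great deal for other parameter constellations.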

Our presentation of the model differs from the one in P12 because we emphasize the fact that the relevant quantities are obtained by taking three-dimensional integrals, whereas slightly more abstract mean operators are used to describe the very same objects in Pindyck’s work. Incidentally, we hope that two equivalent descriptions may benefit readers and clarify both notations and their precise meanings, where appropriate.

Turning to the main concrete claim of P12, we believe it is fair to say that the author interprets his own results as an indication that “moderate abatement policies” should be pursued in the face of the large uncertainty surrounding the amount of future temperature increase and its unknown impact. This broad conclusion is stated in the abstract, in the introduction and elsewhere in the paper. In the concluding remarks, the argument takes an analogous stance: asking “whether a stringent (abatement) policy is needed now,” Pindyck states that results “are consistent with beginning slowly.” A similar lesson, we believe, can be drawn from the simplified example contained in Section 5.2 of P12: if the temperature increase under business as usual is known to be \(T_H=6^\circ \)C in \(H=100\) years and the economic impact is known, then willingness to pay to have no warming, \(w^*(0)\), is still only 2.2% (italics are ours).

These considerations spurred us to assess the robustness of the model, using more recent data (“Extension” section) to check how much can be retained of the gist of the original paper in different setups. However, we begin in “Verification” section with a verification of the majority of results contained in P12.

Results

Verification

Estimation of the displaced gamma density

P12 assumes a displaced gamma density for both \(T_H\) and \(\gamma \). Almost all the integrals for calculating WTP involve these two random variables. The results of the paper, therefore, depend heavily on this preliminary estimation. As in P12, we fit the parameters of a displaced gamma density for the random variable \(T_H\) with \({\mathbb {E}}(T_H)=3^\circ \)C, \({\mathbb {P}}(T_H> 7^\circ C)=5\%\) and \({\mathbb {P}}(T_H> 10^\circ C)=1\%\). We obtain \(r_{T}=3.9, \lambda _{T}=0.92\) and \(\theta _{T}=-1.22\), which can be compared with the values reported in P12: 3.8, 0.92 and \(-1.13\), respectively. Figure 1, which replicates Fig. 1 of P12, shows the two distributions which, in spite of the slightly different parameter values, appear to be almost indistinguishable.

Fig. 1 Distribution of temperature change \(T_H\)

The distribution of the damage coefficient \(\gamma \) is similarly calibrated, fitting a displaced gamma density such that \({\mathbb {E}}(\gamma )=0.0001363\), \({\mathbb {P}}(\gamma \le 0.0000450)=0.17\) and \({\mathbb {P}}(\gamma \le 0.0002295)=0.83\).

We obtain \(r_{\gamma }=4.43, \lambda _{\gamma }=20939\) and \(\theta _{\gamma }=-7.28\cdot 10^{-5}\). Our estimates are reasonably close to the values \(r_{\gamma }=4.50, \lambda _{\gamma }=21431\) and \(\theta _{\gamma }=-7.46\cdot 10^{-5}\) reported in P12, even though the numerical approximation of the parameters for \(\gamma \) was less straightforward than for \(T_H\), due to the different orders of magnitude of the three parameters; for details, refer to the “Discussion” section.

Estimation of willingness to pay

The main feature of P12 is the computation of several WTPs under different values of key parameters. WTP is displayed graphically in several figures, and also tabulated in two tables. In what follows, Figs. 2, 3 and 4 are the replications of Figs. 4, 5 and 6 of P12, respectively, while Table 1 is the replication of Table 1 of P12.

Fig. 2 \(w^{*}(0)\) for known temperature change \(T_H\); \(\gamma =0.0001363\), \(\eta =2\), \(g_0=0.015\), 0.020, 0.025 and \(\delta =0\)

First, some benchmark WTPs are computed in a setup with no uncertainty on future temperature increase and economic impact; Fig. 2 depicts the WTP \(w^*(0)\) to keep warming at zero as a function of a known \(T_H\), assuming a fixed value for \(\gamma ={\bar{\gamma }}={\mathbb {E}}(\gamma )=0.0001363\), and showing three possible scenarios for the growth rate \(g_0= 0.015\), 0.02 or 0.025 (the remaining parameters are the index of relative risk aversion \(\eta =2\) and the discount rate \(\delta =0\)). In this scenario without uncertainty, the integrals needed to evaluate WTP are one-dimensional. It is, of course, formally impossible to test whether two figures are the same, but an eyeball test of the twin figures in P12 and in this paper (and lots of zooming!) shows that they essentially display the very same quantities. Perhaps more concretely, to exemplify the meaning of Fig. 2, it is reported in P12 that when \(T_H=6\) and \(g_0=0.02\), then \(w^*(0)\) is about 0.022 or 2.2%. For comparison, our own computations produce 0.0216.
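In this certainty case, \(G_\infty \) and \(G_\tau \) collapse to one-dimensional integrals over time, and base R’s integrate suffices. A sketch, reusing the consumption path cons() and the parameters (\(g_0=0.02\), \(\eta =2\), \(\delta =0\), \(t_{max}=500\)) defined in the snippet of the “Theory” section:

```r
## w*(0) with known T_H = 6 and gamma fixed at its mean: the integrals of
## Eq. (12) become one-dimensional (time only), with no densities involved.
gbar <- 0.0001363
G1 <- function(TH) {
  integrate(function(t) cons(t, TH, gbar)^(1 - eta) * exp(-delta * t),
            lower = 0, upper = t_max)$value
}
1 - (G1(6) / G1(0))^(1 / (1 - eta))   # our computations give about 0.0216
```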

Second, WTPs are displayed in Fig. 3, allowing for uncertainty. In the diagram, the functions \(w^*(\tau )\) are depicted in four scenarios, combining different risk aversions \(\eta \) and baseline growth rates \(g_0\). Full three-dimensional integrals are involved in this setup, and care is needed to set apparently irrelevant (technical) parameters. It is again hard to discern any differences between the two versions of Fig. 3, from this paper and from Pindyck’s article.

Fig. 3 \(w^{*}(\tau )\), \(T_H\) and \(\gamma \) uncertain, \(\eta =1.5, 2\), \(g_0=0.015\), 0.020, 0.025 and \(\delta =0\)

Third, we focus on Fig. 4, where the dependence of \(w^*(3)\), namely the WTP required to limit the increase in temperature to \(3^\circ \)C, is plotted as a function of risk aversion \(\eta \) (under two different discount rates \(\delta \)). This diagram is interesting because it is difficult to reproduce, particularly if \(\eta \) approaches 1 or when \(\eta =4\). In the first case, there is an evident singularity in the definition of \(U(\cdot )\), and we resort to the fact that, in this situation, the utility function is—up to a constant—a logarithmic function. It is less clear why high values of \(\eta \) prove to be relatively ill-posed for the integration routines in the cubature package. While additional details are deferred to the “Discussion” section, we observe that our Fig. 4 is extremely similar to the one presented in P12.

Fig. 4 \(w^{*}(3)\) versus \(\eta \). \(g_0=0.020\) and \(\delta =0, 0.01\)

Finally, Table 1 lists 19 pairs of WTPs, and allows for a more rigorous comparison of the numeric figures obtained, varying the reference values of some parameters. In particular, the two WTPs \(w^*(0)\) and \(w^*(3)\) are tabulated and, unless otherwise indicated, \(\delta =0\), \(\eta =2\), \(g_0=0.02\), \({\mathbb {E}}(T_H)=3^\circ C\), \({\mathbb {E}}(\gamma )=0.0001363\), and social utility is computed on a time span of 5 centuries (\(t_{max}=500\)).

Table 1 WTPs with alternative parameter values

Some of our estimates of WTP (in the second and fourth columns) match those in P12 very well (placed side by side in the third and fifth columns). For instance, in the first row relative to the baseline case, the values of \(w^*(0)\) and \(w^*(3)\) differ by about \(5\cdot 10^{-4}\). In other cases, the gaps are insignificant from a practical and economic point of view, and give an idea of the “numeric noise” that affects (accurate) estimates obtained by different authors, with distinct software and code.

However, some noteworthy discrepancies can be seen in Rows 8, 16, 18 and 19. In Row 8, the WTPs computed in P12 setting \(\eta =4\) and \(g_0=0.01\) are 30- or 60-fold larger than ours. Row 7 contains WTPs when the growth rate is 0.02 and, in agreement with intuition, halving the growth rate of the economy inflates the WTP to reduce the expected damage inflicted by climate change on weak economic growth. While, according to our computations, \(w^*(0)\) moves from 0.0015 in Row 7 to 0.0060 in Row 8 (a fourfold increase), in P12 we have a spectacular jump from 0.0014 to 0.1844. The same pattern is visible for \(w^*(3)\). In Row 18, our \(w^*(0)\) and \(w^*(3)\) are considerably smaller than the WTPs in P12 (differences exceed 2 and 1 percentage points, respectively).

These deviations cannot be attributed to slightly different computational methods being used in different packages, or to dissimilar settings of the abundant default parameters used in standard routines for numerical computations.

Finally, the last row of Table 1 portrays a large difference in both WTPs. We suspect this is due to a material typing error in P12, since the entries in Pindyck’s table are exactly the same as in Row 10, whereas we expected the same figures as in Row 9. As can be seen by looking at (8) of P12, with \(\eta =2\) an increase in \(\delta \) compensates for a decrease in \(g_0\) of the same absolute value (the discounted utility \(u(C_t)e^{-\delta t}\) then depends on \(g_0\) and \(\delta \) only through the sum \(g_0+\delta \)), implying that a scenario with \({\mathbb {E}}(T_H)=5^\circ C\) and \(g_0=0.02\) is actually the same as a scenario where \({\mathbb {E}}(T_H)=5^\circ C\), \(g_0=0.01\) and \(\delta =0.01\). For the same reason, the entries in Rows 1 and 16 in Table 1 should be the same, but this is not the case in P12.

All in all, while some of our estimates are close to the ones obtained by Pindyck, we were unable to replicate around one fifth of the figures in P12, and huge differences, in absolute and relative terms, cannot be explained by rounding or typing errors alone. An inspection of the MATLAB code kindly provided by Professor Pindyck shows that some of the reasons for the discrepancies may be related to an approximation of a triple integral with a summation of double integrals, as well as the use of functions, such as dblquad and quad, that are now—more than a decade after the original code was written—no longer recommended. Moreover, as we discuss in the “Discussion” section, the results are unfortunately quite sensitive to apparently irrelevant parameters in several of the instances considered in P12. Such parameters, namely the upper extremes of integration, were hardwired in the code, and were not subject to a robustness test. In these cases, placing blind trust in the number-crunching routines turned out to be unfortunate, since essentially inaccurate estimates were generated.

Extension

This section provides an extension of the original paper where we change the data used to estimate densities for the temperature increase. As with every extension, since we change the data, the results shown here cannot be expected to resemble the ones in P12; instead, they should be used to assess how the original results are affected by the availability of new data. The IPCC’s Fifth Assessment Report (IPCC 2014), released in 2014, contains new data that can be used to estimate fresh densities for the uncertain quantities used in P12. In particular, IPCC (2014) describes four possible GHG scenarios, called representative concentration pathways (RCP), which are named after a possible range of radiative forcing values in 2081–2100 relative to pre-industrial values. The pathway without any GHG emissions mitigation effort beyond current legislation, which can be considered as the baseline path for the present analysis, is RCP8.5; the alternative scenarios, with increasing levels of mitigation and, therefore, decreasing levels of emissions, are RCP6.0, RCP4.5 and RCP2.6.

Under scenario RCP8.5, the forecast is a \(3.7^\circ C\) temperature increase from 1986–2005 to 2081–2100, with a “likely” range of \(2.6^\circ C\) to \(4.8^\circ C\). In Fig. 5, we extend Fig. 1 of P12 and plot the “old” density for \(T_H\) and the “new” density, interpreting the term “likely” as a 66% or a 90% confidence interval (in P12, the term “likely” is viewed as 66%). Using the information \({\mathbb {E}}(T_H)=3.7^\circ C\), \({\mathbb {P}}(T_H\le 2.6^\circ C)=17\%\) and \({\mathbb {P}}(T_H\le 4.8^\circ C)=83\%\) to estimate the parameters of the displaced gamma density \(f_{2014}(x)\), we obtain \(r_{2014}=7.82\), \(\lambda _{2014}=2.38\) and \(\theta _{2014}=0.42\). Figure 5 shows that the distributions computed using more recent data shift to the right, and are more concentrated around the mean of 3.7\(^\circ \)C (in particular, the right tail of both versions obtained using 2014 data is clearly much thinner than in P12).
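For reference, the fit of the 2014 density can be sketched with the least-squares approach detailed in the “Estimation of densities” subsection below (the objective function and its name are ours):

```r
## Least-squares fit of the 2014 density: E(T_H) = 3.7,
## P(T_H <= 2.6) = 0.17 and P(T_H <= 4.8) = 0.83 ("likely" read as 66%).
sse2014 <- function(p) {
  r <- p[1]; lambda <- p[2]; theta <- p[3]
  if (r <= 0 || lambda <= 0) return(1e10)   # keep the search feasible
  (r / lambda + theta - 3.7)^2 +
    (pgamma(2.6 - theta, shape = r, rate = lambda) - 0.17)^2 +
    (pgamma(4.8 - theta, shape = r, rate = lambda) - 0.83)^2
}
optim(c(7, 2, 0.5), sse2014)$par   # close to (7.82, 2.38, 0.42)
```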

Fig. 5 Distribution of temperature change \(T_H\), 2014 data

Observe that, since the new displacement \(\theta _{2014}=0.42\) is greater than zero, it is impossible to keep the temperature increment at 0 and, hence, \(w^*(0)\) cannot be estimated. Accordingly, in Table 2, we report WTPs for the same cases listed in Table 1, replacing \(w^*(0)\) with \(w^*(0.5)\), under the two interpretations of “likely”. In the third (sixth) column, we display \(w^*(0.5)\) (\(w^*(3)\)) for the various cases based on IPCC (2007a, b, c); the fourth and fifth (seventh and eighth) columns show the values obtained from 2014 data.

Table 2 WTPs using alternative parameter values, IPCC (2014) data

Looking at \(w^*(0.5)\), a scenario related to a stricter abatement policy, WTPs most often increase moving from the 2007 to the 2014 assessment, meaning that the newer IPCC report implies an upward revision of the WTP to curb warming to \(0.5^\circ \) C. In any case, the differences in WTPs are modest, generally around 0.2–0.3 percentage points or less. Scenarios that assume \({\mathbb {E}}(T_H)=5^\circ C\) imply a lower WTP under 2014 data. This is related to the lower standard deviation underlying the 2014 projections since, keeping the same expected value, a lower standard deviation implies a lower WTP, as explained in P12. Changes in the standard deviation of the temperature would also alter WTP in the baseline case of P12, with \(w^*(0)=0.0118\) and \(w^*(3)=0.0053\): halving (doubling) the standard deviation, which makes extreme events less (more) likely, would decrease (increase) WTPs to 0.0107 and 0.0025 (0.0132 and 0.0091), respectively. Columns 4 and 5 also show that WTPs are virtually the same, regardless of which interpretation of “likely” was chosen.

In contrast, examination of the columns relative to \(w^*(3)\) reveals that more recent data “suggest” a lower WTP for a moderate abatement policy (namely, limiting the temperature increment to \(3^\circ C\)). With respect to the WTP estimated using 2007 data, differences range from 0.1 to 1.8 percentage points. If “likely” means 90% confidence intervals, then, typically, WTPs are further reduced by an amount ranging between 0.1 and 0.7 percentage points.

Figures 6 and 7 are the counterparts (with more recent data) of Figs. 5 and 6 in P12. Careful inspection confirms the previous findings and comments but, perhaps more importantly, suggests that the inclusion of fresh data from 2014 has not changed the gist of the conclusions and lessons learned from P12. It is true that temperature cannot be kept at the present level, and that strict (moderate) abatement policies are slightly more (less) worthwhile, but the changes are minor in size in many circumstances of practical importance. This was somewhat to be expected, since the marginal analysis in P12 had already pointed out that a hike in the mean temperature would be offset by a reduction in the standard deviation.

Fig. 6 \(w^{*}(\tau )\), \(T_H\) and \(\gamma \) uncertain, \(\eta =1.5, 2\), \(g_0=0.015, 0.020,0.025\), and \(\delta =0\), 2014 data. Observe that WTP cannot be plotted for \(\tau =0\) since \(\theta _{2014}=0.42>0\)

Fig. 7 \(w^{*}(3)\) versus \(\eta \). \(g_0=0.020\) and \(\delta =0, 0.01\), 2014 data

Discussion

This section is devoted to the analysis of four important issues faced in the verification of P12. First, we investigate the effect of seemingly irrelevant technical choices related to the upper extremes of integration, since \(+\infty \) cannot materially be used in (most of) the commonly available numerical routines. Second, we derive \(\gamma \) in terms of \(\beta \) under a nonlinear version of (3). Then we describe why care is needed in estimating the coefficients of the displaced gamma densities for \(T_H\) and \(\gamma \). Finally, we explain how the statement “with the other moments of the distribution unchanged” has to be implemented. Each of the four subsections below explores one of these issues.

On the upper limits of integration

As we have seen in the “Theory” section, the quantification of the WTPs relies entirely on the computation of the two (multi-dimensional) integrals in (12). In principle, these integrals are to be computed over intervals reaching \(+\infty \). For practical reasons, however, the support of \(T_H\) and \(\gamma \) is truncated, and the upper limits in the computations are instead taken to be large numbers, which we denote by \(T_{max}\) and \(\gamma _{max}\). We recall that the same computational shortcut is explicitly mentioned in P12, where, say, the utility of consumption is integrated up to \(t_{\max }=500\) (or 1000 in the robustness check of Row 3 in Table 1). All the computations in this work used \(T_{max}=15\) \(^\circ C\) and \(\gamma _{max}=0.0007\). Intuitively, this is justified by the fact that truncation of the distributions should not have a large effect, provided that \(T_{max}\) and \(\gamma _{\max }\) are “big enough”.

Table 3 Case 1 (baseline) of Table 1. Sensitivity analysis of \(w^*(0)\) with several combinations of \(\gamma _{\max }\) \((\times 10^{-4})\) (horizontal axis) and \(T_{max}\) (vertical axis)

Table 3 shows the WTP \(w^*(0)\) for the baseline combination of parameters (corresponding to Row 1 of Table 1), as a function of the upper limits of integration for \(T_H\) and \(\gamma \). Our reference value \(w^*(0)=0.0118\), obtained when \(\gamma _{max}=0.0007\) and \(T_{max}=15\), is singled out and reported in boldface in the table. It is clear that replacing \(+\infty \) in the integrals with smaller values has little consequence: unless \(T_{max}\) or \(\gamma _{max}\) are chosen too small, the results are stable for the set of parameter values used in the table.
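Tables 3 and 4 (and similar sensitivity checks) can be generated by looping the wtp() sketch of the “Theory” section over a grid of truncation points; the grid values below are illustrative:

```r
## Sensitivity of w*(0) to the truncation points, reusing wtp(); the global
## limits are overwritten cell by cell (restore the baseline values after).
Tmax_grid <- c(10, 15, 20, 30)
gmax_grid <- c(5e-4, 7e-4, 1e-3, 2e-3)
sens <- outer(Tmax_grid, gmax_grid, Vectorize(function(Tm, gm) {
  T_max <<- Tm; g_max <<- gm
  wtp(0)
}))
dimnames(sens) <- list(T_max = Tmax_grid, gamma_max = gmax_grid)
round(sens, 4)
```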

Table 4 Case 8 (\(\eta =4\), \(g_0=0.010\)) of Table 1. Sensitivity analysis of \(w^*(0)\) with several combinations of \(\gamma _{max}\) \((\times 10^{-4})\) (horizontal axis) and \(T_{max}\) (vertical axis)

Unfortunately, quite a different behavior is documented in Table 4, which shows \(w^*(0)\) when \(\eta \) increases to 4 and the growth rate \(g_0\) decreases to 0.01 (see the parameters of Row 8 of Table 1). Even a cursory look at the table reveals that WTP depends heavily on \(T_{max}\) and \(\gamma _{max}\), despite the intuitive belief that they should only play a technical role in the computations, bounding the integration domain. It is quite clear that \(w^*(0)=0.0060\), which, as argued in the “Verification” section, is notably different from the figure shown in P12, is not a robust estimate, and its wild fluctuations cast serious doubts on any related policy suggestions. Generally speaking, increasing \(T_{max}\) and \(\gamma _{max}\) appears to drive WTP up toward 100%. This value is reached because there are combinations of T and \(\gamma \) in the integration domain for which consumption “grows” at a negative rate and rapidly approaches infinitesimal levels, due to the small \(g_0\) and the term \(-\gamma T_t\) in (3), generating (very) negative utility. The effect is enhanced by the relatively large value taken by \(\eta \), and we have already observed that several WTPs are hard to compute when \(\eta =4\), the largest value examined in P12.

Moreover, several studies (e.g. the classic work by Mehra and Prescott (1985, 2003) or the recent study by Donadelli et al. (2020)) take into consideration higher values for the risk aversion coefficient, up to \(\eta =10\). For \(\eta \ge 4\), the model appears to be even more fragile. In fact, keeping all other parameters—including the upper limits of integration—at baseline levels, WTP decreases as \(\eta \) increases: \(w^*(0)\) obtained assuming \(\eta =10\) is equal to 0.016% of GDP, which is about one tenth of the value obtained assuming \(\eta =4\) (0.15%, see case 7 of Table 1). This is expected, as the implicit discount rate \(g_0\eta \) increases with \(\eta \). In contrast, in line with observations from Table 4, the estimate rockets as either \(\gamma _{max}\) or \(T_{max}\) increases. We document this phenomenon in Fig. 8, plotting the logarithm of \(w^{*}(0)\) as a function of \(\eta \) and \(\gamma _{max}\), while keeping all other parameters, including \(T_{max}\), at their baseline values. While in some cases WTP decreases smoothly in \(\eta \) as expected, see \(\gamma _{max}=0.0005\), significant increments materialize as soon as \(\gamma _{max}\) is slightly above \(1\cdot 10^{-3}\) when \(\eta =10\). This behavior is generic: as \(\eta \) increases, the threshold of \(\gamma _{max}\) at which the estimate blows up decreases notably.
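The surface of Fig. 8 can be traced in the same way, varying \(\eta \) together with \(\gamma _{max}\) while again reusing the wtp() sketch; the grid is illustrative:

```r
## log w*(0) over a grid of eta and gamma_max, as in Fig. 8; T_max and the
## remaining parameters stay at their baseline values.
eta_grid  <- seq(2, 10, by = 2)
gmax_grid <- c(5e-4, 1e-3, 2e-3, 4e-3)
surf <- outer(eta_grid, gmax_grid, Vectorize(function(e, gm) {
  eta <<- e; g_max <<- gm
  log(wtp(0))
}))
persp(eta_grid, gmax_grid, surf,
      xlab = "eta", ylab = "gamma_max", zlab = "log w*(0)")
```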

Fig. 8 Logarithm of willingness to pay \(w^{*}(0)\) (z axis) as a function of \(\gamma _{max}\) (x axis) and \(\eta \) (y axis). All the other parameters take the baseline values

The previous discussion demonstrates that reliable computations of WTP may be difficult under some circumstances or, alternatively, that the model appears to be fragile, since the results are too sensitive to “internal” inputs of the numerical software. Clearly, changes in the specification of the utility function may remove some forms of ill-posedness. Additional stability would, of course, be obtained by tolerating positive values for \(\delta \), but we explicitly stated in the literature review that the proper level of the intertemporal discount rate is at the core of the (moral) debate among scholars and economists. This is a long-standing debate: the issue was discussed in the symposium involving Metcalf and Stock (2017), Pindyck (2017) and Weyant (2017), but, according to Dasgupta (2008) and Pindyck (2013), the different calibration of the discount rate and other crucial parameters is at the very heart of the ten-fold difference between the estimates of SCC provided in Nordhaus (1994, 2014) and Stern (2007, 2008).

Non-linear damage function

Most scholars agree that very little is known about the precise form of the damage function or whether it is more appropriate to model losses in levels or growth rates of GDP. The broad issue of linearity or non-linearity has been addressed in various frameworks and models, but has not yet been solved. Just to provide one example, Dell et al. (2012) look for evidence of non-linearities in the effects of the average temperature variable, and find little support for this hypothesis as yet, whereas Bretschger and Pattakou (2019) consider a highly convex damage function rather than a linear one.

Here we briefly show that, if future research points to a non-linear damage function rather than the linear formulation of (3), the model can be generalized while maintaining the convenient analytical relationship between \(\gamma \) and \(\beta \). If we suppose that

$$\begin{aligned} g_t=g_0-\gamma _\alpha T_t^{\alpha }, \end{aligned}$$
(13)

where \(\gamma _\alpha >0\) is an impact coefficient depending on the exponent \(\alpha \), then, following the steps outlined in the “Theory” section, one easily obtains

$$\begin{aligned} \gamma _\alpha =\frac{\beta T_H^{2-\alpha }}{2 ^\alpha \int _{0}^{H}{\Big [1-\Big (\frac{1}{2}\Big )^{\frac{t}{H}}\Big ]^\alpha \mathrm{d}t}}. \end{aligned}$$
(14)

Note that \(\gamma _\alpha \) is a decreasing function of \(\alpha \); when \(\alpha >1\) (\(0<\alpha <1\)), we would face smaller (greater) losses for small increases in \(T_H\) and greater (smaller) losses for large increments of temperature.
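A quick numerical check of (14): for \(\alpha =1\), the formula must collapse back to the linear relation (7). A sketch (the values of \(\beta \) and \(T_H\) are purely illustrative and cancel in the ratio):

```r
## gamma_alpha of Eq. (14); alpha = 1 must recover 1.79 * beta * T_H / H.
gamma_alpha <- function(alpha, beta, TH, H = 100) {
  denom <- 2^alpha * integrate(function(t) (1 - 0.5^(t / H))^alpha,
                               lower = 0, upper = H)$value
  beta * TH^(2 - alpha) / denom
}
beta <- 1.36e-4; TH <- 3
gamma_alpha(1, beta, TH) / (1.79 * beta * TH / 100)   # ratio is ~ 1
```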

Estimation of densities

P12 assumes that the two most important uncertain quantities of the model are distributed as displaced gamma distributions, which offer the bonus of remarkable analytical tractability. However, most of what we say would hold, with simple and obvious changes, for any distribution. Essentially, since \(f(x|r,\lambda , \theta )\) depends on three unknown parameters to be estimated, three equations should suffice; in the “Verification” section, we have minimized, say for \(T_H\), the sum of squared deviations from the given moments or values of the cumulative distribution function, which were in turn extracted from the literature. Since the mean of a displaced gamma is known and equal to \(r/\lambda +\theta \), the sum of squared errors is

$$\begin{aligned} \left( \frac{r}{\lambda }+\theta -3\right) ^2+\left( \int _\theta ^7 f(x|r,\lambda , \theta )\;\mathrm{d}x-0.95\right) ^2+ \left( \int _\theta ^{10} f(x|r,\lambda , \theta )\;\mathrm{d}x-0.99\right) ^2, \end{aligned}$$
(15)

where the conditions \({\mathbb {E}}(T_H)=3^\circ C\), \({\mathbb {P}}(T_H\le 7^\circ C)=0.95\) and \({\mathbb {P}}(T_H\le 10^\circ C)=0.99\) can easily be seen. Even though the previous function of \(r,\lambda , \theta \) can obviously be optimized, most optimization packages implicitly assume by default that the sensitivity of the target function with respect to changes in the variables is of the same magnitude. While this is roughly true in the estimation of the density for \(T_H\), it is definitely not the case for the parameters of the density of the impact \(\gamma \), which differ by several orders of magnitude and are reported in P12 to be \(r=4.5, \lambda =21431\) and \(\theta = -7.46\cdot 10^{-5}\). Typically, in similar cases, the default choice of the numerical method may not be the correct one, possibly resulting in inaccurate results. To tackle this “scaling” problem (Nash 2014), the user may add optional information to the algorithm, basically providing an educated guess of the correct magnitudes as (additional) input. In our case, we used the R command optim, and accurate results were obtained only after selecting the numerical method “BFGS” and setting the option parscale=c(1,1000,0.1) to feed the optimizer with parameter scales that reflect the different orders of magnitude of the parameters involved.
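For the simpler case of \(T_H\), the minimization of (15) can be sketched as follows (the objective name is ours); for \(\gamma \), the same code applies after replacing the three conditions and passing method="BFGS" together with the parscale option quoted above to optim.

```r
## Least-squares fit of Eq. (15) for T_H: E(T_H) = 3,
## P(T_H <= 7) = 0.95 and P(T_H <= 10) = 0.99.
sse <- function(p) {
  r <- p[1]; lambda <- p[2]; theta <- p[3]
  if (r <= 0 || lambda <= 0) return(1e10)   # keep the search feasible
  (r / lambda + theta - 3)^2 +
    (pgamma(7  - theta, shape = r, rate = lambda) - 0.95)^2 +
    (pgamma(10 - theta, shape = r, rate = lambda) - 0.99)^2
}
optim(c(4, 1, -1), sse)$par   # close to our estimates (3.9, 0.92, -1.22)
```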

To summarize, it should be kept in mind that, in this case, computational methods and scaling coefficients other than the defaults were provided to the optimizer. Obviously, care and some experience are needed to customize such details prior to minimization, and users are advised to additionally scrutinize the adequacy of the results using analytical or graphical methods. The task is made much easier when verifying an existing paper that can be referred to; it appears to be much more difficult when no other source gives an indication of suitable starting points or scales.

On sensitivity analysis

Some of the several sensitivity analyses reported in Table 1 relate to an increase in the mean \(\mu \) of \(T_H\) or \(\gamma \), or both. To help the reader “verify our verification”, we briefly outline how to obtain the same results as P12.

The statement “with the other moments of the distribution unchanged” means that the value of \(\theta \) should not be altered, and that the values of the parameters r and \(\lambda \) must be chosen in such a way that \(\sigma ^2\) is left unchanged and the desired \(\mu \) is obtained; moments beyond the second are allowed to vary. Exploiting the properties of the displaced gamma distribution, it can be shown that there is a closed-form solution for r and \(\lambda \) as a function of \(\mu \), \(\sigma ^2\) and \(\theta \):

$$\begin{aligned} r=\frac{(\mu -\theta )^2}{\sigma ^2} \end{aligned}$$
(16)

and

$$\begin{aligned} \lambda =\frac{(\mu -\theta )}{\sigma ^2}. \end{aligned}$$
(17)

We adopted this procedure and obtained estimates for Cases 9–14 in Table 1 that are virtually identical to the ones contained in P12.
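A minimal sketch of (16) and (17), raising \({\mathbb {E}}(T_H)\) from 3 to \(5^\circ \)C while preserving \(\sigma ^2\) and \(\theta \) (the helper name is ours):

```r
## Eqs. (16)-(17): recompute r and lambda after a mean shift, keeping
## theta and the variance sigma^2 = r / lambda^2 fixed.
shift_mean <- function(mu, sigma2, theta) {
  c(r = (mu - theta)^2 / sigma2, lambda = (mu - theta) / sigma2)
}
sigma2 <- 3.9 / 0.92^2                      # variance implied by our fit
shift_mean(mu = 5, sigma2 = sigma2, theta = -1.22)
```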

Conclusions

In this paper, we present a verification and an extension of Pindyck (2012). Retracing the path followed by the author enabled us to verify the accuracy of the estimates of willingness to pay in some cases. However, we find critical discrepancies in about 20% of the cases under investigation. Two of these discrepancies may possibly be due to material typing errors, but the others are more fundamental and likely related to specific values taken by the parameter \(\eta \) and to technical settings of the integration routines (or, more explicitly, to the way integrals on the positive real line are approximated on bounded domains). Further research is needed to gain insights into the convergence of WTP in some critical and policy-relevant cases. Moreover, additional analysis would be welcome, at least to shed (empirical and theoretical) light on fundamental modeling choices related to the selection of Eq. (3) on the growth rate and on possible non-linear extensions, as mentioned in the “Discussion” section.

We further explored the effect of different settings through an extension, showing that the framework of P12 can be applied to more recent data from the IPCC’s Fifth Assessment Report (IPCC 2014). The re-estimation appears to be broadly consistent with the original statement that willingness to pay, given what was known and the sheer uncertainty involved, should be “moderate”, in Pindyck’s interpretation of this term.

We believe that the most crucial outcome of this work was demonstrating that some results, which are based on overlooked but demanding computations, critically depend on the values of technical parameters of the numeric algorithms, such as the upper extremes of integration, and cannot be trusted. The fragility of this version of the model exemplifies why excruciating effort is often needed to verify the results obtained by other scholars, even in this case, where contextual information was available in an extended working paper written by Pindyck on the very same model.

While we hope that our efforts make a contribution to the ongoing discussion on the usefulness of verifying and extending scientific studies, we are well aware that even greater endeavors will surely be required to understand and mitigate the adverse effects of climate change.