See also: [[=Mervyn King & John Kay]]; [[=David Tuckett]]
## The Use and Misuse of Models for Climate Policy
https://www.journals.uchicago.edu/doi/10.1093/reep/rew012
In recent articles I have argued that integrated assessment models (IAMs) have flaws that make them close to useless as tools for policy analysis. **IAM-based analyses of climate policy create a perception of knowledge and precision that is illusory and can fool policymakers into thinking that the forecasts the models generate have some kind of scientific legitimacy**. However, some economists and climate scientists have claimed that we need to use some kind of model for policy analysis and that IAMs can be structured and used in ways that correct for their shortcomings. For example, **it has been argued that although we know very little about key relationships in the model, we can get around this problem by attaching probability distributions to various parameters and then simulating the model using Monte Carlo methods**. I argue that this would buy us nothing and that a simpler and more transparent approach to the design of climate change policy is preferable. I briefly outline what such an approach would look like.
---
*Highlight [1]:* “Pay no attention to the man behind the curtain!” — L. Frank Baum, The Wonderful Wizard of Oz
*Highlight [1]:* IAM-based analyses of climate policy create a perception of knowledge and precision that is illusory and can fool policymakers into thinking that the forecasts the models generate have some kind of scientific legitimacy.
*Highlight [1]:* (1) Certain inputs—functional forms and parameter values—are arbitrary, but they can have huge effects on the results the models produce.
*Highlight [2]:* (2) We know very little about climate sensitivity,
*Highlight [2]:* (3) One of the most important parts of an IAM is the damage function, i.e., the relationship between an increase in temperature and gross domestic product (GDP; or the growth rate of GDP). When assessing climate sensitivity, we can at least draw on the underlying physical science and argue coherently about the relevant probability distributions. But when it comes to the damage function, we know virtually nothing—there is no theory and no data that we can draw from. As a result, developers of IAMs have little choice but to specify what are essentially arbitrary functional forms and corresponding parameter values.
*Highlight [2]:* (4) IAMs can tell us nothing about “tail risk,” i.e., the likelihood or possible impact of a catastrophic climate outcome, such as a temperature increase above 5°C, that has a very large impact on GDP. And yet it is the possibility of a climate catastrophe that is (or should be) the main driving force behind a stringent abatement policy.
*Highlight [2]:* (1) All models have flaws—after all, any model is a simplification of reality—and yet economists build and use models all the time.
*Highlight [3]:* (2) Yes, there is uncertainty over climate sensitivity, and we know very little about the damages likely to result from higher temperatures. But can’t our uncertainty over climate sensitivity or the “correct” damage function be handled by assigning probability distributions to certain key parameters and then running Monte Carlo simulations?
*Highlight [3]:* (3) We have no alternative. We must develop the best models possible in order to estimate the social cost of carbon and/or evaluate particular policies. In other words, working with even a highly imperfect model is better than having no model at all.
*Underline [3]:* (3) We have no alternative.
*Underline [3]:* In other words, working with even a highly imperfect model is better than having no model at all.
*Highlight [3]:* (4) Finally, if we don’t use IAMs, how can we possibly estimate the SCC and evaluate alternative greenhouse gas (GHG) abatement policies? Should we rely instead on expert opinion? And don’t experts have some kind of implicit mental models that drive their opinions? If so, isn’t it better to make the model explicit?
*Underline [3]:* don’t experts have some kind of implicit mental models that drive their opinions? If so, isn’t it better to make the model explicit?
*Highlight [3]:* I will argue that the use of IAMs to estimate the SCC or evaluate alternative policies is problematic because it creates a veneer of scientific legitimacy that is misleading.
*Highlight [3]:* I argue that the best we can do is rely on “expert” opinion, perhaps combined with relatively simple, transparent, and easy-to-understand models. After all, the ad hoc equations that go into most IAMs are no more than reflections of the modeler’s own “expert” opinion.
*Underline [3]:* I argue that the best we can do is rely on “expert” opinion, perhaps combined with relatively simple, transparent, and easy-to-understand models.
*Underline [3]:* the ad hoc equations that go into most IAMs are no more than reflections of the modeler’s own “expert” opinion.
*Highlight [4]:* As I explained in Pindyck (2013a), many of the key relationships and parameter values in these models have no empirical (or even theoretical) grounding and thus the models cannot be used to provide any kind of reliable quantitative policy guidance.
*Highlight [4]:* Although some developers of IAMs understand that there is considerable uncertainty over climate sensitivity and that we don’t know what the “correct” damage function is, they think they have a solution to this problem. In particular, they believe that the uncertainty can be handled by assigning probability distributions to certain key parameters and then running Monte Carlo simulations. Unfortunately, this won’t help. The problem is that we don’t know the correct probability distributions that should be applied to various parameters, and different distributions—even if they all have the same mean and variance—can yield very different results for expected outcomes, and thus for estimates of the SCC.
*Underline [4]:* we don’t know the correct probability distributions that should be applied to various parameters, and different distributions—even if they all have the same mean and variance—can yield very different results for expected outcomes, and thus for estimates of the SCC.
*Highlight [5]:* What can we possibly learn from assigning arbitrary probability distributions to the parameters of an arbitrary function and running Monte Carlo simulations? I would argue that the answer is nothing. The bottom line here is simple: If we don’t understand how A affects B, but we create some kind of model of how A affects B, running Monte Carlo simulations of the model won’t make up for our lack of understanding.
*Underline [5]:* If we don’t understand how A affects B, but we create some kind of model of how A affects B, running Monte Carlo simulations of the model won’t make up for our lack of understanding.
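The point that distributions sharing a mean and variance can yield very different expected outcomes is easy to demonstrate with a Monte Carlo simulation. The sketch below uses a hypothetical tipping-point damage function and two temperature distributions (all numbers illustrative, not from the paper) that are identical in mean (3.0) and variance (2.25) yet differ sharply in expected damages:

```python
import random

random.seed(42)
N = 100_000

def damage(temp):
    # Hypothetical tipping-point damage function: 50% GDP loss if warming
    # exceeds 4 degrees C, no loss otherwise (illustrative assumption).
    return 0.5 if temp > 4.0 else 0.0

# Two temperature distributions with the SAME mean (3.0) and variance (2.25):
# A: normal with std 1.5;  B: two-point, 1.5 or 4.5 with probability 1/2 each.
draws_normal = [random.gauss(3.0, 1.5) for _ in range(N)]
draws_two_point = [random.choice([1.5, 4.5]) for _ in range(N)]

exp_damage_normal = sum(damage(t) for t in draws_normal) / N
exp_damage_two_point = sum(damage(t) for t in draws_two_point) / N

print(f"expected damage, normal:    {exp_damage_normal:.3f}")    # ~0.126
print(f"expected damage, two-point: {exp_damage_two_point:.3f}")  # ~0.250
```

The two inputs are indistinguishable by the usual summary statistics, yet expected damages differ by roughly a factor of two, because what matters is the probability mass beyond the threshold, which mean and variance alone do not pin down.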
*Highlight [5]:* the argument is that working with even a highly imperfect model is better than having no model at all. This might be a valid argument if we were honest and up-front about the limitations of the model. But often we are not.
*Highlight [6]:* Models sometimes convey the impression that we know much more than we really do. They create a veneer of scientific legitimacy that can be used to bolster the argument for a particular policy. This is particularly true for IAMs, which tend to be large and complicated and are not always well documented.
*Highlight [6]:* although it is not clear exactly what is going on, since the black box is “scientific,” we are supposed to take those results seriously and use them for policy analysis. A couple of examples might help to clarify this point.
*Highlight [7]:* I believe that we need to be much more honest and up-front about the inherent limitations of IAMs. I doubt that the developers of IAMs have any intention of using them in a misleading way. Nevertheless, overselling their validity and claiming that IAMs can be used to evaluate policies and determine the SCC can end up misleading researchers, policymakers, and the public, even if it is unintentional. If economics is indeed a science, scientific honesty is paramount.
*Highlight [8]:* **The Modeler Has Too Much Flexibility.** Put simply, it is much too easy to use a model to generate, and thus seemingly validate, the results one wants.
*Highlight [8]:* Thus a modeler whose prior beliefs are that a stringent abatement policy is (or is not) needed, can choose a low (or high) discount rate or choose other inputs that will yield the desired results. If there were a clear consensus on the correct values of key parameters, this would not be much of a problem. But (putting it mildly) there is no such consensus.
*Highlight [9]:* The point here is that there is hardly any need for a model; decide on the discount rate and climate sensitivity and you pretty much have an estimate of the SCC. The model itself is almost a distraction.
*Underline [9]:* The point here is that there is hardly any need for a model; decide on the discount rate and climate sensitivity and you pretty much have an estimate of the SCC. The model itself is almost a distraction.
*Highlight [9]:* So, is the SCC small or large? To answer that we only have to agree on climate sensitivity and the discount rate. We don’t necessarily have to agree on which model to use.
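The claim that the discount rate (together with climate sensitivity) largely determines the SCC can be seen in a bare present-value calculation. The damage stream below is a hypothetical placeholder, not a number from the paper; only the discount rate varies:

```python
def pv_of_damages(annual_damage, start_year, horizon, rate):
    """Present value of a constant annual damage stream (hypothetical numbers)."""
    return sum(annual_damage / (1.0 + rate) ** t
               for t in range(start_year, start_year + horizon))

# Illustrative assumption: climate damages of $1 trillion/year, running from
# year 50 for 100 years (a stand-in for what climate sensitivity would imply).
for rate in (0.01, 0.02, 0.04):
    pv = pv_of_damages(1.0, 50, 100, rate)
    print(f"discount rate {rate:.0%}: PV ~ ${pv:.1f} trillion")
# Roughly $38.7T at 1%, $16.3T at 2%, and $3.6T at 4%: a more-than-tenfold
# spread from the discount rate alone, with the damage stream held fixed.
```

With the physical scenario held fixed, the answer still swings by an order of magnitude, which is why agreeing on the discount rate matters far more than agreeing on a model.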
*Highlight [10]:* what really matters for the SCC is the likelihood and possible impact of a catastrophic climate outcome: a much larger than expected temperature increase and/or a much larger than expected reduction in GDP caused by even a moderate temperature increase. IAMs, however, simply cannot account for catastrophic outcomes.
*Highlight [10]:* as with the rest of the damage function, the specification of the threshold and the extent to which GDP decreases when the threshold is crossed are arbitrary and not based on any theory or empirics, and thus they cannot tell us much about what would happen if the temperature increase turns out to be very large. The damage function, with or without “tipping points,” can do little more than reflect the beliefs of the modeler. How do we know that the possibility of a catastrophic outcome is what really matters for the SCC and the design of climate policy? Because unless we are ready to accept a discount rate that is very small, the “most likely” scenarios for climate change simply don’t generate enough damages—in present value terms—to matter.
*Highlight [11]:* What we have to worry about is the possibility of a climate-induced decrease in GDP so large as to be considered catastrophic.
*Highlight [11]:* Economists often build models to avoid relying on subjective
*Highlight [12]:* (expert or otherwise) opinions. But it is important to keep in mind that the inputs to IAMs (equations and parameter values) are already the result of “expert” opinion—in this case, the modeler is the “expert.”
*Highlight [12]:* we would use expert opinion to determine the inputs to a simple, transparent, and easy-to-understand model (and I stress the importance of easy-to-understand). As an example of how this might be done, start with three or four potential catastrophic outcomes that, under business as usual, might occur, say, 50 years in the future. Those outcomes might be a 10, 30, or 50 percent drop in GDP and consumption (or something worse). Now attach probabilities to those outcomes, say .2, .1, and .05, respectively (so the probability of no catastrophe is .65). Given these outcomes and probabilities, and given a discount rate, we can calculate the present value of the expected benefits from avoiding these outcomes. Next, come up with an estimate (or set of estimates and associated probabilities) of the reduction in CO2 emissions needed to eliminate the catastrophic scenarios.
*Underline [12]:* we would use expert opinion to determine the inputs to a simple, transparent, and easy-to-understand model (and I stress the importance of easy-to-understand).
*Highlight [12]:* Yes, the calculations I have just described constitute a “model,” but it is a model that is exceedingly simple and straightforward and involves no pretense that we know the damage function, the feedback parameters that affect climate sensitivity, or other details of the climate–economy system. And yes, some experts might base their opinions on one or more IAMs, on a more limited climate science model, or simply on their research experience and/or general knowledge of climate change and its impact.
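The back-of-the-envelope calculation Pindyck describes can be written out in a few lines. The outcomes (10, 30, 50 percent drops in GDP) and probabilities (.2, .1, .05) are his; the GDP level, discount rate, and the simplification of counting a single year of losses are illustrative assumptions of this sketch:

```python
# Outcomes and probabilities from the text: 10/30/50 percent drops in GDP,
# with probabilities .2/.1/.05 (so no catastrophe with probability .65).
outcomes = [(0.10, 0.20), (0.30, 0.10), (0.50, 0.05)]

gdp = 100.0           # assumed annual GDP in year 50, $ trillion (illustrative)
discount_rate = 0.03  # assumed discount rate (illustrative)
years = 50

# Expected GDP loss in year 50 under business as usual
expected_loss = gdp * sum(loss * prob for loss, prob in outcomes)

# Present value of avoiding that expected loss (one year of losses only;
# a permanent drop would be a discounted stream of such terms)
pv_benefit = expected_loss / (1.0 + discount_rate) ** years

print(f"expected loss in year 50:   ${expected_loss:.1f} trillion")  # $7.5 trillion
print(f"present value of avoidance: ${pv_benefit:.2f} trillion")     # ~$1.71 trillion
```

The whole "model" fits on a screen, and every input is an explicit, arguable expert judgment rather than a parameter buried in a black box, which is exactly the transparency the passage argues for.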
*Underline [12]:* insufficiently precise. But I believe that we have no choice. Building and using elaborate models might allow us to think that we are approaching the climate policy problem more scientifically, but in the end, like the Wizard of Oz, we would only be drawing a curtain around our lack of knowledge.
*Highlight [12]:* Some might argue that the approach I have outlined here is insufficiently precise. But I believe that we have no choice. Building and using elaborate models might allow us to think that we are approaching the climate policy problem more scientifically, but in the end, like the Wizard of Oz, we would only be drawing a curtain around our lack of knowledge.
*Highlight [13]:* It would certainly be nice if the problems with IAMs simply boiled down to an imprecise knowledge of certain parameters, because then uncertainty could be handled by assigning probability distributions to those parameters and then running Monte Carlo simulations. Unfortunately, not only do we not know the correct probability distributions that should be applied to these parameters, we don’t even know the correct equations to which those parameters apply. Thus the best one can do at this point is to conduct a simple sensitivity analysis on key parameters, which would be more informative and transparent than a Monte Carlo simulation using ad hoc probability distributions.
*Underline [13]:* It would certainly be nice if the problems with IAMs simply boiled down to an imprecise knowledge of certain parameters, because then uncertainty could be handled by assigning probability distributions to those parameters and then running Monte Carlo simulations. Unfortunately, not only do we not know the correct probability distributions that should be applied to these parameters, we don’t even know the correct equations to which those parameters apply.
*Highlight [13]:* How probable is such an outcome (or set of outcomes) and how bad would it (they) be? And by how much would emissions have to be reduced to avoid these outcomes? I have argued that the best we can do at this point is to come up with plausible answers to these questions, most likely by relying at least in part on numbers supplied by climate scientists and environmental economists, that is, utilize expert opinion. This kind of analysis would be simple, transparent, and easy to understand. It might not inspire the kind of awe and sense of scientific legitimacy conveyed by a large-scale IAM, but that is exactly the point. It would draw back the curtain and help us to clarify our beliefs about climate change and its impacts.
*Underline [13]:* It might not inspire the kind of awe and sense of scientific legitimacy conveyed by a large-scale IAM, but that is exactly the point.
## Averting Catastrophes: The Strange Economics of Scylla and Charybdis
https://www.aeaweb.org/articles?id=10.1257/aer.20140806
Faced with numerous potential catastrophes—nuclear and bioterrorism, mega-viruses, climate change, and others—which should society attempt to avert? A policy to avert one catastrophe considered in isolation might be evaluated in cost-benefit terms. But because society faces multiple catastrophes, simple cost-benefit analysis fails: even if the benefit of averting each one exceeds the cost, we should not necessarily avert them all. We explore the policy interdependence of catastrophic events, and develop a rule for determining which catastrophes should be averted and which should not. (JEL D61, Q51, Q54)