Models and Forecasting
From Climate Change Reconsidered, a work of the Nongovernmental International Panel on Climate Change
J. Scott Armstrong, a professor at The Wharton School of the University of Pennsylvania and a leading figure in the discipline of professional forecasting, has pointed out that forecasting is a practice and discipline in its own right, with its own institute (the International Institute of Forecasters, founded in 1981), its own peer-reviewed journal (the International Journal of Forecasting), and an extensive body of research that has been distilled into a set of scientific procedures, currently numbering 140, that must be followed to make reliable forecasts (Armstrong, 2001).
According to Armstrong, when physicists, biologists, and other scientists who do not know the rules of forecasting attempt to make climate predictions based on their training and expertise, their forecasts are no more reliable than those made by nonexperts, even when they are communicated through complex computer models (Armstrong, 2001). In other words, forecasts by scientists, even large numbers of very distinguished scientists, are not necessarily scientific forecasts. In support of his position, Armstrong and a colleague cite research by Philip E. Tetlock (2005), a psychologist and professor of organizational behavior at the University of California, Berkeley, who “recruited 288 people whose professions included ‘commenting or offering advice on political and economic trends.’ He asked them to forecast the probability that various situations would or would not occur, picking areas (geographic and substantive) within and outside their areas of expertise. By 2003, he had accumulated more than 82,000 forecasts. The experts barely if at all outperformed non-experts and neither group did well against simple rules” (Green and Armstrong, 2007). The failure of expert opinion to lead to reliable forecasts has been confirmed in scores of empirical studies (Armstrong, 2006; Craig et al., 2002; Cerf and Navasky, 1998; Ascher, 1978) and illustrated in historical examples of incorrect forecasts made by leading experts (Cerf and Navasky, 1998).
In 2007, Armstrong and Kesten C. Green of Monash University conducted a “forecasting audit” of the IPCC Fourth Assessment Report (Green and Armstrong, 2007). The authors’ search of the contribution of Working Group I to the IPCC “found no references … to the primary sources of information on forecasting methods” and “the forecasting procedures that were described [in sufficient detail to be evaluated] violated 72 principles. Many of the violations were, by themselves, critical.”
One principle of scientific forecasting Green and Armstrong say the IPCC violated is “Principle 1.3 Make sure forecasts are independent of politics.” The two authors write, “this principle refers to keeping the forecasting process separate from the planning process. The term ‘politics’ is used in the broad sense of the exercise of power.” Citing David Henderson (Henderson, 2007), a former head of economics and statistics at the Organization for Economic Cooperation and Development (OECD), they say “the IPCC process is directed by non-scientists who have policy objectives and who believe that anthropogenic global warming is real and dangerous.” They conclude:
The forecasts in the Report were not the outcome of scientific procedures. In effect, they were the opinions of scientists transformed by mathematics and obscured by complex writing. Research on forecasting has shown that experts’ predictions are not useful in situations involving uncertainty and complexity. We have been unable to identify any scientific forecasts of global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder.
Scientists working in fields characterized by complexity and uncertainty are apt to confuse the output of models—which are nothing more than a statement of how the modeler believes a part of the world works—with real-world trends and forecasts (Bryson, 1993). Computer climate modelers certainly fall into this trap, and they have been severely criticized by many scientists, including Balling (2005), Christy (2005), Essex and McKitrick (2007), Frauenfeld (2005), Michaels (2000, 2005, 2009), Pilkey and Pilkey-Jarvis (2007), Posmentier and Soon (2005), and Spencer (2008), for failing to notice that their models do not replicate real-world phenomena.
Canadian science writer Lawrence Solomon (2008) interviewed many of the world’s leading scientists active in scientific fields relevant to climate change and asked them for their views on the reliability of the computer models used by the IPCC to detect and forecast global warming. Their answers showed a high level of skepticism.
Princeton’s Freeman Dyson has written elsewhere, “I have studied the climate models and I know what they can do. The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry, and the biology of fields and farms and forests. They do not begin to describe the real world that we live in” (Dyson, 2007).
Many of the scientists cited above observe that computer models can be “tweaked” to reconstruct climate histories after the fact, as the IPCC points out in the passage quoted at the beginning of this chapter. But this provides no assurance that the new model will do a better job forecasting future climates, and indeed points to how unreliable the models are. Individual climate models often have widely differing assumptions about basic climate mechanisms but are then “tweaked” to produce similar forecasts. This is nothing like how real scientific forecasting is done.
Kevin Trenberth, a lead author along with Philip D. Jones of chapter 3 of the Working Group I contribution to the IPCC’s Fourth Assessment Report, replied to some of these scathing criticisms on the blog of the science journal Nature. He argued that “the IPCC does not make forecasts” but “instead proffers ‘what if’ projections of future climate that correspond to certain emissions scenarios,” and then hopes these “projections” will “guide policy and decision makers” (Trenberth, 2007). He says “there are no such predictions [in the IPCC reports] although the projections given by the Intergovernmental Panel on Climate Change (IPCC) are often treated as such. The distinction is important.”
This defense is hardly satisfactory. As Green and Armstrong point out, “the word ‘forecast’ and its derivatives occurred 37 times, and ‘predict’ and its derivatives occurred 90 times in the body of Chapter 8” of the Working Group I report, and a survey of climate scientists conducted by those same authors found “most of our respondents (29 of whom were IPCC authors or reviewers) nominated the IPCC report as the most credible source of forecasts (not ‘scenarios’ or ‘projections’) of global average temperature.” They conclude that “the IPCC does provide forecasts.” We agree, and add that those forecasts are unscientific and therefore likely to be wrong.
Zubler et al. (2011) further tested the reliability and validity of climate modeling through a study of aerosol emissions. Using the COSMO-CLM regional climate model (RCM; Doms and Schättler, 2002), the authors attempted to demonstrate that changes in natural and anthropogenic aerosol emissions over Europe lead to changes in the region’s radiation budget, comparing simulations driven by climatologically averaged aerosols with simulations in which aerosol emissions changed over time (transient runs). From these results, the authors inferred that the RCM underestimated the real trends in aerosol emissions and overestimated cloud fraction, suggesting that processes occurring beyond the model domain were responsible for the discrepancy. The results of this study point to the dominance of natural variations in driving surface temperature changes over Europe, and the RCM still over- or underestimated several key quantities.
To test whether current climate models are reliable predictors of future climate change, Crook and Forster (2011) compared observed global, Arctic, and tropical ocean surface temperatures with output from a number of coupled ocean-atmosphere climate models. They also performed “optimal fingerprinting analyses on the components of surface temperature response to test their forcing, feedback and heat storage responses.” The models involved in these tests were those of the World Climate Research Programme’s Coupled Model Intercomparison Project phase 3 (CMIP3).
The two University of Leeds (UK) researchers found that tropical 20th-century warming was too large and Arctic amplification too low in the Geophysical Fluid Dynamics Laboratory CM2.1 model, the Meteorological Research Institute CGCM2.3.2a model, and the MIROC3.2(hires) model “because of unrealistic forcing distributions,” and they determined that “the Arctic amplification in both National Center for Atmospheric Research models is unrealistically high because of high feedback contributions in the Arctic compared to the tropics.” In addition, they report that “few models reproduce the strong observed warming trend from 1918 to 1940,” noting that “the simulated trend is too low, particularly in the tropics, even allowing for internal variability, suggesting there is too little positive forcing or too much negative forcing in the models at this time.” It appears that today’s models are still lacking in many aspects of their ability to faithfully reproduce the climate of the past, which renders their ability to accurately portray the climate of the future rather questionable.
Another example of unreliable climate modeling comes from the general circulation models (GCMs) of the Fourth Assessment Report (AR4). The IPCC used these models to predict that the 21st century would bring increased warming in the tropical upper troposphere, with a critical influence on water vapor, lapse rate, and cloud feedbacks. Fu et al. (2011) set out to test this prediction; they examined trends in the temperature difference between the tropical upper and lower-middle troposphere based on satellite microwave sounding unit (MSU) observations, and then compared them with AR4 GCM simulations for the period 1979-2010. They found that the AR4 GCMs overestimated tropospheric warming over that period by more than 60 percent relative to the observed temperature changes. Because of this discrepancy, the authors caution against relying on climate models and question their ability to correctly foretell the planet’s climate future.
A 2011 study published in Science (Schmittner et al., of Oregon State University) reported results that imply a lower probability of imminent extreme climate change than previously thought. According to the study, models that predict extreme climate change scenarios with temperature increases of up to 10°F are implausible; instead, if atmospheric concentrations of CO2 were to double, the global average surface temperature would increase by only 3 to 4.7°F.
The researchers, led by Schmittner, focused on the climate during the Last Glacial Maximum, 19,000 to 23,000 years ago, combining extensive sea and land surface temperature reconstructions to estimate how sensitive the climate system is to altered amounts of CO2. They found that previous climate models had used an exaggerated degree of sensitivity, overestimating the cooling that occurred during this period and depicting a planet completely covered in ice, when in fact the evidence shows the tropics and subtropics were largely ice-free. Consequently, those models overestimate future temperature increases to the same degree. Even the IPCC concedes that progress in modeling “enables an assessment that climate sensitivity is likely to be in the range of 2 to 4.5°C with a best estimate of about 3°C, and is very unlikely to be less than 1.5°C. Values substantially higher than 4.5°C cannot be excluded, but agreement of models with observations is not as good for those values.”
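The sensitivity figures above can be related to other CO2 scenarios through the standard assumption, used in most such estimates, that equilibrium warming scales with the logarithm of the CO2 concentration ratio. The sketch below is illustrative only; the function name is ours, and the Fahrenheit range is taken from the Schmittner et al. study as reported above.

```python
import math

def warming_for_ratio(co2_ratio, sensitivity_per_doubling):
    """Equilibrium warming implied by a CO2 concentration ratio,
    assuming warming scales with log2 of the ratio (illustrative)."""
    return sensitivity_per_doubling * math.log2(co2_ratio)

# Schmittner et al.'s range for a doubling is roughly 3 to 4.7 deg F.
print(warming_for_ratio(2.0, 3.0))   # a doubling returns the sensitivity itself: 3.0
print(warming_for_ratio(1.5, 4.7))   # a 50% increase under the higher estimate
```

Under this relation, halving the assumed sensitivity halves every projected warming figure, which is why the disputes over sensitivity estimates described above matter so much for forecasts.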
In an exercise designed to illustrate the extent of the mismatch between IPCC Fourth Assessment Report (AR4) models and observations, Fu, Qian, and Wu (2011) evaluated individual runs of the AR4 models included in the Coupled Model Intercomparison Project phase 3 (CMIP3) in simulating the multi-decadal variability (MDV) of past global mean temperature. They determined that “most of the individual model runs fail to reproduce the MDV of past climate, which may have led to the overestimation of the projection of global warming for the next 40 years or so.” More specifically, they note that simply by taking into account the impact of the Atlantic Multi-decadal Oscillation (AMO), “the global average temperature could level off during the 2020s-2040s,” such that the true temperature change between 2011 and 2050 “could be much smaller than the AR4 projection.”
Additional information on this topic, including reviews of climate model inadequacies not discussed here, can be found at http://www.co2science.org/subject/m/subject_m.php under the heading Models of Climate.
Armstrong, J.S. 2001. Principles of Forecasting: A Handbook for Researchers and Practitioners. Kluwer Academic Publishers, Norwell, MA.
Armstrong, J.S. 2006. Findings from evidence-based forecasting: Methods for reducing forecast error. International Journal of Forecasting 22: 583-598.
Ascher, W. 1978. Forecasting: An Appraisal for Policy Makers and Planners. Johns Hopkins University Press. Baltimore, MD.
Balling, R.C. 2005. Observational surface temperature records versus model predictions. In Michaels, P.J. (Ed.) Shattered Consensus: The True State of Global Warming. Rowman & Littlefield. Lanham, MD. 50-71.
Bryson, R.A. 1993. Environment, environmentalists, and global change: A skeptic’s evaluation. New Literary History 24: 783-795.
Cerf, C. and Navasky, V. 1998. The Experts Speak. Johns Hopkins University Press. Baltimore, MD.
Christy, J. 2005. Temperature changes in the bulk atmosphere: beyond the IPCC. In Michaels, P.J. (Ed.) Shattered Consensus: The True State of Global Warming. Rowman & Littlefield. Lanham, MD. 72-105.
Climate Change Reconsidered: Website of the Nongovernmental International Panel on Climate Change. http://www.nipccreport.org/archive/archive.html
Craig, P.P., Gadgil, A., and Koomey, J.G. 2002. What can history teach us? A retrospective examination of long-term energy forecasts for the United States. Annual Review of Energy and the Environment 27: 83-118.
Crook, J.A. and Forster, P.M. 2011. A balance between radiative forcing and climate feedback in the modeled 20th century temperature response. Journal of Geophysical Research 116: 10.1029/2011JD015924.
Dyson, F. 2007. Heretical thoughts about science and society. Edge: The Third Culture. August.
Essex, C. and McKitrick, R. 2007. Taken by Storm: The Troubled Science, Policy and Politics of Global Warming. Key Porter Books. Toronto, Canada.
Freedman, A. 2011. Most dire global warming forecasts unlikely, study finds. Washington Post, Capital Weather Gang blog, 28 November. http://www.washingtonpost.com/blogs/capital-weather-gang/post/most-dire-global-warming-forecasts-unlikely-study-finds/2011/11/27/gIQAz2er4N_blog.html
Frauenfeld, O.W. 2005. Predictive skill of the El Niño-Southern Oscillation and related atmospheric teleconnections. In Michaels, P.J. (Ed.) Shattered Consensus: The True State of Global Warming. Rowman & Littlefield. Lanham, MD. 149-182.
Fu, Q., Manabe, S. and Johanson, C.M. 2011. On the warming in the tropical upper troposphere: Models versus observations. Geophysical Research Letters 38: 10.1029/2011GL048101.
Fu, C.-B., Qian, C., and Wu, Z.-H. 2011. Projection of global mean surface air temperature changes in next 40 years: Uncertainties of climate models and an alternative approach. Science China Earth Sciences 54: 1400-1406.
Green, K.C. and Armstrong, J.S. 2007. Global warming: forecasts by scientists versus scientific forecasts. Energy & Environment 18: 997-1021.
Henderson, D. 2007. Governments and climate change issues: The case for rethinking. World Economics 8: 183-228.
Michaels, P.J. 2009. Climate of Extremes: Global Warming Science They Don’t Want You to Know. Cato Institute. Washington, DC.
Michaels, P.J. 2005. Meltdown: The Predictable Distortion of Global Warming by Scientists, Politicians and the Media. Cato Institute, Washington, DC.
Michaels, P.J. 2000. Satanic Gases: Clearing the Air About Global Warming. Cato Institute. Washington, DC.
Pilkey, O.H. and Pilkey-Jarvis, L. 2007. Useless Arithmetic. Columbia University Press, New York.
Posmentier, E.S. and Soon, W. 2005. Limitations of computer predictions of the effects of carbon dioxide on global temperature. In Michaels, P.J. (Ed.) Shattered Consensus: The True State of Global Warming. Rowman & Littlefield. Lanham, MD. 241-281.
Solomon, L. 2008. The Deniers: The World Renowned Scientists Who Stood Up Against Global Warming Hysteria, Political Persecution, and Fraud**And those who are too fearful to do so. Richard Vigilante Books. Minneapolis, MN.
Spencer, R. 2008. Climate Confusion: How Global Warming Hysteria Leads to Bad Science, Pandering Politicians and Misguided Policies that Hurt the Poor. Encounter Books. New York, NY.
Tetlock, P.E. 2005. Expert Political Judgment—How Good Is It? How Can We Know? Princeton University Press, Princeton, NJ.
Trenberth, K.E. 2007. Global warming and forecasts of climate change. Nature blog. http://blogs.nature.com/climatefeedback/2007/07/global_warming_and_forecasts_o.html. Last accessed May 6, 2009.
Zubler, E.M., Folini, D., Lohmann, U., Lüthi, D., Schär, C. and Wild, M. 2011. Simulation of dimming and brightening in Europe from 1958 to 2001 using a regional climate model. Journal of Geophysical Research 116: D18205, doi:10.1029/2010JD015396.