The “social cost of carbon” is an estimate of the economic damage attributed to emitting one additional ton of carbon dioxide, a calculation the Biden administration is looking to use to justify stringent regulation of carbon dioxide emissions.
Missouri Attorney General Eric Schmitt, joined by Arkansas, Arizona, Indiana, Kansas, Montana, Nebraska, Ohio, Oklahoma, South Carolina, Tennessee, and Utah, has now filed a lawsuit arguing that the use of this metric in policy would constitute an overreach of government power.
Carbon dioxide is an odorless gas that is the basis for almost all plant life on Earth. Carbon dioxide emissions have, of course, slightly warmed the surface temperature of the planet, but on the other hand, access to plentiful energy has been accompanied by a doubling of life expectancy in the developed world and an elevenfold growth in personal wealth, as noted in the 2016 book “Lukewarming.”
A recent commentary that one of us (Kevin Dayaratna) published, titled “Why the Social Cost of Carbon Is the Most Useless Number You’ve Never Heard Of,” presented years of research on the topic conducted at The Heritage Foundation’s Center for Data Analysis.
He noted how easy it is to artificially inflate the social cost of carbon by manipulating various assumptions in the calculation, including the discount rate, which is essentially an estimate of how much money invested today in, say, the stock market will grow in the future. Because future climate damages are discounted back to present-day dollars at that rate, a lower rate makes damages decades away loom much larger today.
The Office of Management and Budget has recommended calculating figures like the social cost of carbon with discount rates of 3%, 5%, and 7%. At the higher rates, the social cost of carbon becomes quite small, because damages far in the future are worth little in today’s dollars. Using a 3% discount rate, the Biden administration hiked the social cost of carbon to $51 per ton, a significant increase from the Trump administration’s $7 per ton.
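To make that arithmetic concrete, here is a minimal sketch in Python. It is purely illustrative, and the $1,000 damage figure is an assumption for demonstration; the official estimates come from integrated assessment models, not from this toy calculation. It simply discounts a single hypothetical climate damage occurring 100 years from now back to present value at each of the rates discussed above:

```python
# Present value of a hypothetical $1,000 climate damage incurred 100 years
# from now, discounted at the rates discussed above. Illustrative only.
damage = 1_000.0  # assumed future damage, in dollars
years = 100       # how far in the future the damage occurs

for rate in (0.02, 0.03, 0.05, 0.07):
    present_value = damage / (1 + rate) ** years
    print(f"{rate:.0%} discount rate -> ${present_value:,.2f} today")
```

At 2%, the future damage is worth about $138 today; at 3%, about $52; at 5%, about $7.60; at 7%, barely a dollar. That collapse as the rate rises is the whole game: pick a low enough rate and distant damages dominate the calculation.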
Even that might not do for the Biden administration, which could rely upon the recent arguments made by University of Chicago economist Michael Greenstone, who said that the discount rate should be 2% or lower.
Additionally, in order to determine the social cost of carbon, we need to have a good idea of how much the planet’s surface will warm under various policy scenarios.
To calculate that warming, scientists have for decades used computer models to find the “sensitivity” of the climate to added carbon dioxide. This sensitivity is conventionally expressed as the equilibrium warming, in degrees, for a doubling of atmospheric carbon dioxide.
Here is a dirty little secret that few are aware of: All those horrifying future temperature changes that grace the front pages of papers of record aren’t really predicted warming above today’s observed level. Instead, they are the difference between two model simulations: a modeled base climate and a modeled future one.
The “base climate” isn’t the observed global temperature at a given point in time. Instead, it is what a computer model simulates temperatures to be prior to any significant changes in carbon dioxide.
Reality need not apply to these calculations. And there are sometimes very big differences between the base models and reality, especially in the high latitudes of both hemispheres, and over the world’s vast tropics.
The usual procedure is then to instantaneously quadruple carbon dioxide and let the model spin up to an equilibrium climate. Then (hold onto your hat) the resulting warming is divided by two, taking advantage of the fact that warming increases by roughly the same amount for each doubling of carbon dioxide, i.e., logarithmically with concentration, something that has been known for a long time. Because quadrupling is two doublings, halving that warming yields the final figure, called the equilibrium climate sensitivity to doubled carbon dioxide.
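A minimal sketch of that logarithmic bookkeeping, using an assumed illustrative sensitivity of 3 C per doubling (the sensitivity value itself is the contested quantity, not anything measured here):

```python
import math

# Standard assumption: equilibrium warming scales with the logarithm of the
# CO2 concentration, so every doubling adds the same amount of warming.
def equilibrium_warming(co2_ratio: float, sensitivity: float = 3.0) -> float:
    """Warming in degrees C for a given CO2 concentration ratio C/C0."""
    return sensitivity * math.log2(co2_ratio)

quadrupled = equilibrium_warming(4.0)  # two doublings -> 6.0 C
doubled = equilibrium_warming(2.0)     # one doubling  -> 3.0 C

# Halving the quadrupled-CO2 warming recovers the per-doubling sensitivity.
print(quadrupled / 2 == doubled)  # True
```

Quadrupling is two doublings, so dividing the quadrupled-CO2 warming by two recovers the per-doubling figure exactly, provided the logarithmic assumption holds.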
With regard to the equilibrium climate sensitivity, climate science is very odd: The more money we spend studying it, the more uncertain our forecasts become.
This fact is becoming increasingly obvious as a new suite of models is emerging that will be incorporated in the next climate science report from the U.N.’s Intergovernmental Panel on Climate Change, to be released next year.
For decades, the range of the equilibrium climate sensitivity has seen no real narrowing since the 1979 National Academy of Sciences report, “Carbon Dioxide and Climate: A Scientific Assessment,” produced by a panel chaired by Jule Charney of the Massachusetts Institute of Technology.
The “Charney Sensitivity,” as it came to be called, was 1.5-4.5 C for the lower atmospheric warming that would be caused by a doubling of carbon dioxide.
Subsequent assessments, such as some of the serial “scientific assessments” of the Intergovernmental Panel on Climate Change, gave the same range, or something very close.
Periodically, the U.S. Department of Energy helps coordinate what are called “coupled model intercomparison projects” (CMIPs). The penultimate one, CMIP5, used in the 2013 Intergovernmental Panel on Climate Change assessment, contained 32 families of models with a sensitivity range of 2.1-4.7 C and a mean value of 3.4 C; both its lower bound and its mean are warmer than Charney’s.
Nevertheless, the Intergovernmental Panel on Climate Change rounded this range back to the good old 1.5-4.5 C, because there was some skepticism about the warmer models.
Setting aside the differences between the various modeled base climates and reality, calculations of the equilibrium climate sensitivity by other researchers that are grounded in observations yield much lower values, between 1.4 and 1.7 C.
The new CMIP6 model suite, on the other hand, displays an even larger range of sensitivities, extending well beyond anything observed. Across the models available so far (which is most of them), the range is 1.8-5.6 C with an estimated mean of 4 C, and that mean is very likely the value the Biden administration will use to determine the social cost of carbon.
So, sadly, the new CMIP6 models are worse than the older ones.
A 2017 study showed that, with one exception, the older CMIP5 models made large systematic errors throughout the global tropics. The exception was a Russian model, which also had the lowest sensitivity of them all, at 2.05 C.
Last year, researchers examined the new CMIP6 suite, and what they found was not good:
“Rather than being resolved, the problem has become worse, since now every member of the CMIP6 generation of climate models exhibits an upward bias in the entire global troposphere as well as in the tropics.”
A paper just published in Geophysical Research Letters suggests that new estimates of how human aerosol emissions enhance clouds may be the problem. Interestingly, the model with the least cloud-aerosol interaction is the revised Russian model; its sensitivity is down to 1.8 C, yet it still overpredicts observed global warming.
When it became apparent that the new models were predicting even more warming than their predecessors, Paul Voosen, the climate correspondent at Science magazine, interviewed a number of climate scientists and found that the new, “improved” renditions of the cloud-aerosol interaction are causing real problems, either completely eliminating any warming in the 20th century or producing far too much.
One of the scientists involved, Andrew Gettelman, told Voosen that “it took us a year to work that out,” proving yet again that climate scientists modify their models to give what French modeler Frederic Hourdin called an “anticipated acceptable result.”
Acceptable to whom? Hourdin’s landmark paper clearly indicates that it is scientists, not objective science, who subjectively decide how much warming looks right.
The implications of the systematic problems with the CMIP models, and with other manipulated models, for the social cost of carbon may be large: The Biden administration will rely on these same models to beef up its estimate.
The Obama administration did just that, using an outdated equilibrium climate sensitivity distribution, one not grounded in observation, that inflated its social cost of carbon estimates.
Indeed, peer-reviewed research by Kevin Dayaratna, Pat Michaels, Ross McKitrick, and David Kreutzer, published in two separate journals, has shown that under reasonable and realistic assumptions for climate sensitivity, alongside other assumptions, the social cost of carbon may effectively be zero or even negative.
It is now apparent that the reason for using the social cost of carbon to begin with is very simple: to be able to control the energy, agricultural, and industrial sectors of the economy, which will result in big costs for ordinary Americans with little to no climate benefit in return.
So altogether, we have one manipulated class of models, the models that determine climate sensitivity, likely being used as the basis for manipulating the social cost of carbon. The consequences for the uncertainty of that number are profound.
As a result, the public should be very cautious about accepting new calculations of the social cost of carbon. Although the social cost of carbon is built on an interesting class of statistical models, its use in policy should also serve as a case study of model manipulation at its finest.