New Biden Climate Policies May Face Strong Legal Headwinds
Although the United States is rejoining the Paris Agreement, the ability of the Biden administration to discharge its obligations under the pact is heavily circumscribed. Even though Democrats controlled both houses of Congress during the first two years of the Obama presidency, bespoke cap-and-trade legislation stalled in the Senate, forcing the Obama administration to use agency rulemaking instead.
If politics create a forbidding hurdle to any future congressional action on climate, it is paradoxically the science—and the indeterminacy of court-deference doctrines—that could defeat agency action. Under the Administrative Procedure Act, regulations are quashed if they are found to be “arbitrary and capricious.”
To survive challenge in federal court, a regulation must be based on a “record” sound enough to demonstrate that the agency took a “hard look” at all the relevant evidence and came to a reasoned conclusion. Though courts tend to be deferential to an agency’s scientific assessments, it doesn’t take much of a mistake or omission for a court to find that the agency’s “look” was not quite hard enough to pass muster.
Thus the fate of any agency action on climate could come down to which way a federal court decides to load the dice on deference in disposing of the inevitable lawsuits. The Biden administration may not be able to comply with its renewed Paris commitments—even if it wants to.
Global warming caused by anthropogenerated emissions of carbon dioxide (CO2) and other less consequential greenhouse gases is real and observable. But what is more important is the amount of future warming that can be expected. The imprecision of climate forecasts may have something to do with why no significant federal legislation has been passed to restrict emissions. In 2016, the Supreme Court put a stay on the Obama administration’s expansive Clean Power Plan because it lacked legislative backing. When Congress did attempt legislation, the 2009 Waxman-Markey cap-and-trade bill narrowly passed the House but never made it onto the Senate floor, despite a large Democratic majority. Cap-and-trade was a major factor in the party’s loss of 64 House seats and its majority in the 2010 congressional elections.
In 2009, the Obama administration’s EPA issued an “Endangerment Finding” covering CO2 and other human emissions. The evidential basis for that finding was limited solely to General Circulation Climate Models (GCMs). All the GCMs, save one, proved incapable of simulating the three-dimensional behavior of the lower atmosphere; the one that did work, from the Russian Institute for Numerical Modelling, predicts less future warming than all the others. Best scientific practice would emphasize this model, but doing so would stunt or reverse any expensive policies, because the future warming it projects is so small and distant.
The Social Cost of Carbon (SCC) was a concept used by the Obama administration to justify sweeping policies, but the administration failed to follow OMB guidelines in the use of discount rates. Using the future warming suggested by both the Russian model and observationally based calculations, along with the enhanced planetary greening caused by increasing atmospheric CO2, the SCC becomes negative (i.e., a benefit) across all discount rates under consideration.
Politically, it will be very difficult for the Biden administration to get significant global-warming legislation through Congress. With an even more constitutionalist Supreme Court than the one that stayed Obama’s Clean Power Plan, it is highly likely that any sweeping actions taken absent legislation will be voided by the Court.
There is widespread agreement that the climate is changing, and that human activity is a driver of some of the warming that began in the late 1970s. However, it is difficult to reconcile the warming of the early 20th century with CO2 changes: the rate of warming then was statistically similar to the recent rise, but the concentration of atmospheric CO2 had barely risen when that warming began.
At this point, the persistent difficulty of quantifying the actual impact of carbon emissions on global temperatures strikes many people as a purely academic exercise. The near-ubiquitous rationale for aggressive climate policies is: The climate is changing, and we know why, and if we don’t do something about it now, it will soon be too late to stave off a set of circumstances so undesirable that any rational analysis must point to rapid decarbonization at almost any cost.
This is clearly simplistic. It is also the approach taken by the Intergovernmental Panel on Climate Change (IPCC) in its 2018 1.5°C Special Report. “Any comparison between 1.5°C and higher levels of warming implies risk assessments and value judgments and cannot be straightforwardly reduced to a cost-benefit analysis,” the IPCC says. However, the IPCC is not a federal agency, and the courts would be unimpressed with such an airy dismissal.
The degree of uncertainty in our technical understanding of the climate is a crucial factor in getting the policy right. If costly climate measures are to be justified with appeals to scientific authority, then it’s particularly important to understand what the science is actually telling us.
This sensible approach is reflected in both of the main vehicles our democracy has at its disposal for policymaking. Legislation requires convincing broad swathes of the public that major changes in how we power our society are worthwhile. And action by administrative agencies must be based on a defensible technical record.
The start of the Obama administration saw a major push to pass a cap-and-trade scheme for pricing CO2 emissions. It passed the House of Representatives by a narrow 219-212 vote, but it was held in committee in the Senate and never made it to the floor. Then came the midterm elections of 2010, in which the Democrats lost 64 seats and their majority; most of the losses were sustained by members who had voted for the bill. That ended the possibility of passing major climate legislation through Congress for the foreseeable future, which put the onus on President Obama to achieve carbon-reduction goals through regulation and executive orders.
At his 2010 postelection press conference, the first question to the president concerned the obvious and consequential repudiation of his global-warming policy by the electorate. His response was “there’s more than one way to skin a cat.”
The story told here shows how that went.
The Obama administration’s strategy for complying with what came to be the CO2 emissions targets of the 2015 Paris Agreement on Climate Change rested on a suite of Environmental Protection Agency regulations. These were empowered by the Agency’s 2009 finding of “endangerment” (hereafter, the Endangerment Finding) to human health and welfare from climate change induced by the emissions of CO2 and other greenhouse gases.
That 2009 finding traces back to the Supreme Court’s 2007 decision in the case of Massachusetts v. EPA, which held that CO2 is indeed a pollutant under the Clean Air Act Amendments of 1990, and that if EPA found that CO2 in vehicle emissions posed a danger to human health and welfare, then the agency was required to regulate it or explain why it did not require regulation—which, the Court held, the Bush-era EPA had failed to do adequately.
The regulations fostered by the Endangerment Finding included limits on CO2 emissions from both vehicles and stationary sources, and the related Clean Power Plan, which would have phased out coal and (soon after) natural gas in America’s power sector.
In February 2016, the Supreme Court whittled down the regulatory limits on carbon emissions by preemptively staying the Clean Power Plan, signaling doubts about its lawfulness. Then the Trump administration replaced the Clean Power Plan with a rule that enshrined natural gas as America’s primary source of electricity. But the root of the Obama EPA’s greenhouse-gas regulations—the Endangerment Finding and its predicate in Massachusetts v. EPA—remains untouched and will almost certainly be called into service again as the basis for greenhouse-gas regulations in the Biden administration.
The Endangerment Finding has not aged well. Its main conclusions rested on often-ambiguous data that have grown in some ways more ambiguous; and on assumptions and models that had potentially serious flaws at the time and whose flaws have become more apparent since, and oddly enough have not been repaired in subsequent model generations.
Given what we now know, it is open to question whether anything like the 2009 Endangerment Finding would survive judicial scrutiny today. The same is true of the scientific record behind any new climate regulations, as those, like the regulations driven by the 2009 Endangerment Finding, would still rest upon climate models (the sole sources of future climate guidance) in which critical flaws have been uncovered. It will be shown below that the only models that fit decades of three-dimensional data over the earth’s vast tropics are the ones with the lowest projected warming for this century—a warming so small that it is demonstrably a net benefit.
This article will also show that all the projections used by the EPA in its Endangerment Finding were manually “tuned” to yield an unrealistic fit to observed early 20th century temperatures. The evidence implies that any new agency regulations designed to drastically cut greenhouse-gas emissions must clear more rigorous judicial scrutiny than hitherto, especially as federal courts continue to refine their doctrines of “deference” to agency decision-making.
Massachusetts v. EPA, the 2009 Endangerment Finding, and the Early National Climate Assessments
The majority opinion in the 2007 case Massachusetts v. EPA held that “pollutant” had a “capacious” meaning as used in the 1990 Clean Air Act Amendments. Writing for the majority, Justice John Paul Stevens said:
The Clean Air Act’s sweeping definition of “air pollutant” includes “any air pollution agent or combination of such agents, including any physical, chemical . . . substance or matter which is emitted into or otherwise enters the ambient air. . . .” On its face, the definition embraces all airborne compounds of whatever stripe, and underscores that intent through the repeated use of the word “any.” CO2, methane, nitrous oxide, and hydrofluorocarbons are without a doubt “physical [and] chemical . . . substance[s] which [are] emitted into . . . the ambient air.” The statute is unambiguous.
The Court rejected EPA’s contention that it had delayed its consideration of greenhouse-gas regulations because it wanted to wait for more and better climate data to come in. The wisdom of that contention will become apparent throughout this essay.
Instead, the Court held that EPA must determine whether greenhouse-gas emissions from new motor vehicles “cause or contribute to air pollution which may reasonably be anticipated to endanger public health or welfare, or whether the science is too uncertain to make a reasoned decision.”
In a vigorous dissent, Justice Scalia argued that the word “pollutant” in the Clean Air Act is a reference to emissions that harm human health, and necessarily excludes any major component of the air we breathe and that is vital to life on the planet. CO2 is the primary building block of photosynthetic life, and it is no surprise that as its concentration has increased in the atmosphere, the earth has experienced a profound greening of both forests and agriculture.
Massachusetts v. EPA emboldened the Obama EPA, which produced a “preliminary” finding of endangerment a mere 90 days into the new administration and then a final finding on December 7, 2009, as a separate rule—an unusual move, given that all agency rulemaking must be based on an independent record—backed by a technical support document (TSD). In the Endangerment Finding, the EPA found that under section 202(a) of the Clean Air Act, greenhouse gases threaten public health and welfare, and that greenhouse-gas emissions from motor vehicles contribute to that threat.
These were actually two distinct findings. First, the EPA Administrator found that the combination of six well-mixed greenhouse gases—carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulfur hexafluoride (SF6)—threatens the public health and welfare of current and future generations and therefore constitutes “air pollution” under the Clean Air Act. Second, the Administrator found that the combined greenhouse-gas emissions from motor vehicles contribute to the atmospheric concentrations of these key greenhouse gases and hence to the threat of climate change. That set the stage for the “Tailpipe Rule” (limiting greenhouse-gas emissions from motor vehicles) and the suite of greenhouse-gas regulations that followed from it.
The Technical Support Document for the Endangerment Finding relied heavily on the second “National Assessment” of climate-change impacts on the U.S., published in 2009 by the consortium of 13 federal agencies that consumed or disbursed climate-change research funding. Based on this “evidence of record,” EPA found that greenhouse gases trap heat on the earth’s surface that would otherwise dissipate into space; that this “greenhouse effect” warms the climate; that human activity is contributing to increased atmospheric levels of greenhouse gases; and that the climate system is warming. As a result, the EPA Administrator judged that the “root cause” of recently observed climate change was “very likely” the observed increase in anthropogenic greenhouse-gas emissions.
EPA found that extreme weather events, changes in air quality, increases in food- and water-borne pathogens, and increases in temperatures resulting from climate change were likely to have adverse health effects. It concluded that climate change created risks to food production and agriculture, forestry, energy, infrastructure, ecosystems, wildlife, water resources, and, as a result of expected sea-level rise, coastal areas. EPA determined that motor-vehicle emissions of greenhouse gases contribute to climate change and thus to the endangerment of public health and welfare.
A coalition of states and industry groups challenged EPA’s greenhouse-gas regulations in a petition for review to the D.C. Circuit Court of Appeals, arguing that there was too much uncertainty in the data to support EPA’s judgment. In Coalition for Responsible Regulation v. EPA (2012), the D.C. Circuit disagreed and upheld the Endangerment Finding. The Court reasoned that the existence of some uncertainty did not on its own undermine the EPA action.
This also followed from the Supreme Court’s earlier holding in Massachusetts v. EPA, when the Court held that the existence of “some residual uncertainty” did not excuse EPA’s decision to decline to regulate greenhouse gases. To avoid regulating emissions of greenhouse gases, EPA would need to show “scientific uncertainty . . . so profound that it precludes EPA from making a reasoned judgment as to whether greenhouse gases contribute to global warming.” According to the D.C. Circuit, courts owed an “extreme degree of deference to the agency when it is evaluating scientific data within its technical expertise.”
And yet that “extreme deference” was notably absent in the Supreme Court’s ruling against the Bush EPA.
Lingering Questions on Climate Change
To support its Endangerment Finding, the EPA’s Technical Support Document ultimately relied heavily on General Circulation Climate Models, or GCMs. These and their descendants (Earth System Models) are at the heart of all four “National Assessments” that have been published to date. With regard to the Endangerment Finding, the second (2009) “National Assessment” of climate change effects on the U.S. is the document of record. It in turn relied heavily on the UN’s Fourth (2007) “Scientific Assessment” of climate change [“AR4”], which again was heavily GCM-dependent. Critical systematic problems in these models were not known (at least to the general scientific community) at the time of the Endangerment Finding.
Relying principally on AR4, EPA cited evidence that the pattern of warming observed since the mid-20th century (warming that actually didn’t begin until 1976, a quarter-century after the nominal 1950 midpoint)—a warming of the lowest layer of the atmosphere (the troposphere) and a cooling immediately above it (the stratosphere)—was consistent with predictions of climate change from anthropogenic greenhouse-gas emissions.
Yet in AR4, the IPCC claimed nowhere near the EPA’s level of confidence in the particulars. In AR4, IPCC reported that the principal factors that drive the Earth’s climate system are the following: the sun; albedo effects, including from clouds; the climate’s response to “external forcings” such as aerosols, volcanoes, and emissions from manmade sources; and, crucially, greenhouse gases.
With respect to both the sun and albedo effects, the IPCC conceded a “low” level of scientific understanding and a lack of consensus about the overall effect on recent climate. With respect to “external forcings,” AR4 acknowledged scientists’ limited understanding of the crucial variable of “equilibrium climate sensitivity” [ECS], expressed as how many degrees Celsius global temperatures will ultimately rise if atmospheric CO2 doubles. A reliable figure for ECS continues to elude scientists’ grasp. An ECS of 1.5°C obviates the case for sweeping reductions in CO2 emissions, as noted later in the section on the Social Cost of Carbon. It is noteworthy that at that level of climate sensitivity, the midrange warming projections of the IPCC (which assume ECS values of 3.0°C) need to be halved, as the ECS is directly proportional to expected warming at any given point in time, such as 2100.
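Since warming at a fixed date scales roughly linearly with the assumed ECS, the halving described above can be illustrated with a minimal sketch (the function name and the sample projection value are illustrative assumptions, not figures taken from the IPCC reports):

```python
# Back-of-envelope sketch: to first order, projected warming at a fixed
# horizon (e.g., 2100) scales linearly with assumed equilibrium climate
# sensitivity (ECS). The projection value below is illustrative only.

def rescale_projection(projected_warming_c, ecs_assumed_c, ecs_alternative_c):
    """Rescale a warming projection to a different ECS, assuming
    warming at a given date is directly proportional to ECS."""
    return projected_warming_c * (ecs_alternative_c / ecs_assumed_c)

midrange_2100 = 3.0  # illustrative projection (deg C) made assuming ECS = 3.0
print(rescale_projection(midrange_2100, 3.0, 1.5))  # 1.5 -- halved, as noted
```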
“Tuning” the Models and the End of Climate Science Objectivity
A stunningly candid article by reporter Paul Voosen appeared in Science two weeks before the 2016 presidential election.
The climate-modelling community, perhaps sensing an inevitable victory for Hillary Clinton in the upcoming election, may have reasoned that admitting to comprehensive flaws in their models would result in a shower of funds to fix them. This was new information that had never been released. According to Voosen:
For years, climate scientists had been mum in public about their “secret sauce:” What happened in the models stayed in the models. The taboo reflected fears that climate contrarians would use the practice of tuning to seed doubt about models.
Indeed, that is precisely what this article does. Voosen elaborated:
[w]hether climate scientists like to admit it or not, nearly every model has been calibrated precisely to the 20th century climate records—otherwise it would have ended up in the trash. “It’s fair to say all models have tuned it,” says Isaac Held, a scientist at the Geophysical Fluid Dynamics Laboratory, another prominent modeling center, in Princeton.
The 20th-century temperature history can be summed up easily: a warming of nearly 0.5°C from 1910 to 1945, a slight cooling from then through the mid-1970s, followed by a warming from 1976 through 1998 that occurred at a rate similar to the earlier one.
Tuning (meaning changing the models’ internal code to get an “anticipated acceptable” answer) so that the 1910-45 warming appears to be a result of human activity (i.e., CO2 emissions, among others) is an exercise in the absurd. By 1910, when that warming began, the concentration of atmospheric CO2 had risen from an early 19th century value of around 285 parts per million (ppm) to around 298—a tiny increase, considering that today’s concentration is around 414 ppm. If the early warming was caused by such a small increase in CO2 concentration, it would be hotter than Hades by now.
Straightforward calculation, using generally accepted equations for the warming caused by CO2 and the cooling by simultaneous emissions of sulfate aerosols (mainly from coal combustion), shows that the concentrations reached by 1910 could have induced a global warming of only about 0.05°C; the observed warming through 1945 was nearly 0.5°C, an order-of-magnitude discrepancy. A warming of 0.05°C is far too small to be detected by any network of thermometers.
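A minimal sketch of that calculation, using the standard logarithmic relationship between CO2 concentration and equilibrium warming (sensitivity per doubling times the base-2 log of the concentration ratio). The transient response and the sulfate-aerosol offset mentioned above are deliberately left out, so the figure it produces is an upper bound, not the net ~0.05°C value:

```python
import math

# Sketch of the equilibrium warming implied by the CO2 rise through 1910,
# assuming warming scales with log2 of the concentration ratio. The
# transient (realized) warming and the offsetting sulfate-aerosol cooling
# discussed in the text would push the net figure considerably lower.

def co2_warming(c_ppm, c0_ppm, ecs_per_doubling):
    """Equilibrium warming (deg C) for a CO2 change from c0_ppm to c_ppm."""
    return ecs_per_doubling * math.log2(c_ppm / c0_ppm)

# Concentrations from the text: ~285 ppm (early 19th century) -> ~298 ppm (1910)
dt = co2_warming(298.0, 285.0, ecs_per_doubling=3.0)
print(round(dt, 2))  # 0.19 deg C at equilibrium, before any aerosol cooling
```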
The news hook for Voosen’s article was a yet-to-be-published paper by Frederic Hourdin, head of the French modelling effort, and 13 coauthors, all from different modelling centers. This quote will forever tarnish the climate models of today and could also bring down the Endangerment Finding:
One can imagine changing a parameter [aka fudging the model] which is known to affect the sensitivity, keeping both this parameter and the ECS in the anticipated acceptable range . . . (emphasis added)
Therefore, it is the scientist, and not the science, that determines future warming. It is the scientist, subject to all the pressures of paradigm-keeping so eloquently written about by Thomas Kuhn in his famous Structure of Scientific Revolutions, that decides what is anticipated and acceptable. It is the scientist, working within the peer pressure of an institutional setting that would react negatively to a finding that fell below the “anticipated acceptable range,” who makes this decision. It should not surprise anyone that Hourdin et al. went further to show this was all kept hush-hush:
In fact, the tuning strategy was not even part of the required documentation in the CMIP5 simulations [the latest collection of climate models—see subsequent text] . . . why such a lack of transparency? Maybe because tuning is often seen as an unavoidable but dirty part of climate modelling. Some may be concerned that explaining that models are tuned could strengthen the arguments of those who question the validity of climate-change projections.
Which, indeed, is what this article does. Hourdin’s article will be thermonuclear to intrusive climate policies in the U.S., should they ever be proposed. In her popular scientific blog, Climate Etc., now-retired Georgia Tech scientist Judith Curry wrote, “If ever in your life you are to read one paper on climate modelling, this is the paper that you should read.”
In the next section, I point out that climate science hasn’t reduced the uncertainty of its projections for over four decades, but that low-sensitivity models, some of which aren’t traditional GCMs, come much closer to reality than the UN’s AR5 (2013) models, or the follow-ons that will appear in AR6 (2022), which are even worse.
The clustering of overly warm results should not be surprising inasmuch as any modeler who publishes far outside of the “anticipated acceptable range” would invite severe, and possibly career-cancelling, retribution from a very large community of faithful modelers. It’s beyond the scope of this article to document the vitriol heaped upon scientists with low-sensitivity models, but even Google will point the curious reader in the right direction.
Improving the Reliability of ECS Calculations
It is generally held that there has been no real narrowing of the range of prospective climate change since the 1979 National Academy of Sciences report Carbon Dioxide and Climate: A Scientific Assessment, chaired by Jule Charney of MIT. The “Charney Sensitivity,” as it came to be called, was in the range of 1.5-4.5°C for the equilibrium lower-atmospheric warming caused by a doubling of CO2. Subsequent assessments, such as some of the IPCC ARs, also listed the midpoint, 3.0°C, as a “most likely” value.
Periodically, the U.S. Department of Energy iterates what it calls “Coupled Model Intercomparison Projects” (CMIPs). The one applicable to the most recent (2013) IPCC state-of-climate-science compendium, CMIP5, contained 32 families of models with a sensitivity range of 2.1-4.7°C and a mean value of 3.4°C—i.e., warmer lower and mean values than Charney’s. The IPCC rounded this back to 1.5-4.5°C, the old Charney Sensitivity, but declined to name a “most likely” warming, owing to the larger spread of the models.
As noted below, observationally based sensitivity calculations by Christy and McNider (2017) and Lewis and Curry (2018), derived from empirical data rather than theoretical computer models, yield ECS values between 1.4 and 1.6°C, pulling the low end of the plausible range back down to roughly the Charney lower bound.
Unfortunately, combining the observationally based sensitivities with the new CMIP6 model suite (which will inform the next (2022) IPCC report) yields an even larger ECS range. The range of the CMIP6 models currently available (and almost all are now available) is 1.8-5.6°C, with an estimated mean (based upon a nearly complete model set) of 4.0°C.
In summary, climate science suffers from the oddity that the more funding is expended, the less precise the outcomes. Models anchored on observations (with one exception, noted below) sit at the bottom of the range and produce low ECS values, while the enormous number of nonworking warmer models reflects climate scientists’ desire to produce results falling within an “anticipated acceptable range,” to quote Hourdin.
Systematic Problems with the National Assessments
The quadrennial National Assessments of climate change effects on the U.S. are mandated by the Global Change Research Act of 1990.
As noted earlier, the Endangerment Finding’s Technical Support Document (TSD) relies heavily on the second “national assessment” (NA-2) of climate change impacts in the United States from 13 federal agencies, now called the U.S. Global Change Research Program (USGCRP).
NA-2 uses a large number of GCM simulations of global climate with enhanced CO2 and other human emissions. The output of these models is then used to drive the so-called “impact” or effects models, which project changes in migration, mortality, nutrition, mental illness, and many other aspects of American life.
The most cited document in the TSD for the 2009 Endangerment Finding is NA-2. It has since become clear that the climate models used in that report contained serious systematic errors in their assessments of the tropical atmosphere.
In our peer review of the NA-2 draft, Paul C. Knappenberger and I wrote (in part):
Of all of the “consensus” government or intergovernmental documents of this genre that [we] have reviewed in [our] 30+ years in this profession, there is no doubt that this is the absolute worst of all. Virtually every sentence can be contested or does not represent a complete survey of a relevant literature. […] Additionally, there is absolutely no effort made by the CCSP [Climate Change Science Program] authors to include any dissenting opinion to their declarative statements, despite the peer-reviewed literature being full of legitimate and applicable reports that provide contrasting findings.
The progenitor of NA-2 was NA-1, published in 2000. The design of NA-1 was similar to that of the three succeeding Assessments. Future climate was generated by GCMs. At the time NA-1 was under development, the National Assessment Synthesis Team had nine GCMs to choose from to forecast 21st-century climate. They chose two: the Canadian Climate Centre model, which produced the greatest 21st-century temperature changes over the U.S. of all nine, and the model from the U.K. Hadley Centre, a part of the Meteorological Office, which produced the largest precipitation changes to 2100. That a team of federally supported climate scientists would purposefully and consequentially choose to employ the most extreme models greatly harmed the credibility of NA-1.
In researching my peer review of the draft NA-1, I set the two models the simplest of tasks: simulating ten-year running means of coterminous U.S. temperature averages over the 20th century. They couldn’t do it. The answers the models gave were worse than simply assuming the 20th-century average value (i.e., no model at all). In other words, the models added errors to the raw data. What they did was exactly analogous to a student taking a four-option multiple-choice test and getting less than 25% correct.
I emailed Tom Karl, then director of the National Climatic Data Center and one of three supervisory scientists for NA-1. He said that I was correct, that his team had looked at more time intervals than I did, and that in each case they found what I did. So the NA-1 science team published a document with significant regulatory implications that was known to be fatally flawed at its heart. That models which performed worse than no model at all were still used in such a policy-influencing document is malpractice.
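The skill test described above amounts to comparing a model’s error against the trivial “no model” baseline of always predicting the long-term mean. A minimal sketch, with invented numbers standing in for the decadal series (these are not NA-1 data):

```python
import math

# Sketch of the skill test described in the text: a forecast is useful only
# if it beats the trivial baseline of always predicting the long-term mean.
# The series below are invented for illustration, not NA-1 output.

def rmse(pred, obs):
    """Root-mean-square error between a prediction and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

observed = [11.8, 12.1, 11.9, 12.3, 12.0, 12.2]   # decadal means, deg C
model    = [11.2, 12.9, 11.1, 13.0, 11.3, 13.1]   # a noisy "forecast"
baseline = [sum(observed) / len(observed)] * len(observed)  # "no model"

# A model that fails this comparison adds error to the raw data:
print(rmse(model, observed) > rmse(baseline, observed))  # True
```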
Trouble in the Troposphere
Models continued to proliferate in each quadrennial National Assessment and in the UN Intergovernmental Panel on Climate Change’s “scientific assessments,” which appear roughly every six years. The most recent of the IPCC reports, AR5, is from 2013, and its suite of models is readily available online at KNMI Climate Explorer.
John Christy, who published the first satellite-sensed global temperature record in 1990, compared the output of the 32 groups of AR5 models to temperature observations in the middle troposphere (approximately 5,000-30,000 feet) over the earth’s tropics.
Understanding the future behavior of the tropical troposphere is crucial to any confident assessment of potentially serious effects of climate change. The tropical ocean is the source of 90% of the moisture that falls on America’s farmland east of the Rockies, one of the most productive agricultural regions on earth. The amount of moisture that ascends into the atmosphere from the tropical oceans is determined by the temperature contrast between the surface and upper layers. The greater the contrast, the more buoyant the surface air, and the more moisture enters the atmosphere.
With one exception, all the models in AR5 failed. Figure 1 shows results of the models compared to three independent sets of observations: temperatures sensed by satellites, data from weather balloons, and the relatively new “reanalysis” data that infill data gaps with a physical model. The plots begin in 1979, when the satellite record starts.
Over a moist surface, the vast majority of incoming solar radiation goes toward evaporating water rather than directly heating the surface. If the forecast input of moisture (i.e., precipitation) is not reliable (thanks to the vertical temperature errors in the tropics), then forecasts of average temperature will also be problematic. The observed data (circles and squares) indicate that the models are overpredicting the warming rate observed at this level by several times.
The IPCC models predict what is often called an “upper tropospheric hot spot” with a substantially enhanced warming rate compared to layers above and beneath. The striking differences in recent decadal warming rates between the climate models and observations are obvious in Figure 2.
Close inspection of Figures 1 and 2 reveals that there is one model that tracks reality: the Russian Institute for Numerical Modelling model INM-CM4. It has the least prospective 21st-century warming of all the AR5 models. Its estimate of equilibrium climate sensitivity is also the lowest of these models, at 2.05°C, compared to the 3.4°C average of the IPCC model group.
The Key Question of Equilibrium Climate Sensitivity
AR5 makes a carefully qualified attribution statement: “It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.”
“Half” of the observed warming in that period is about 0.3°C. That’s the lower limit of human-caused warming, because (as demonstrated earlier) the sharp early 20th-century warming had little to do with anthropogenerated CO2 emissions.
The next UN IPCC report (AR-6) has been delayed to 2022, but its underlying climate models are already becoming available. The new Russian model, INM-CM4.8, is even less sensitive to a doubling of CO2 than its predecessor, down to 1.8⁰C. That predecessor (INM-CM4), as shown in Figures 1 and 2, was by far the best of all at simulating the evolution of tropical temperatures at altitude, and it seems likely that the new version will be comparably skillful. It also predicts less warming than all the other models.
On the other hand, CMIP6 as a whole predicts more warming, because several members of the model ensemble run warmer than the hottest model in CMIP5. Fortunately, it appears that the warmer a CMIP6 model is, the larger its over-forecast of recent-decade and current warming. One has to ask whether the designers of these overheated models regard observed data as just an irrelevant burden.
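The practical import of these sensitivity numbers can be illustrated with the textbook logarithmic forcing approximation, under which equilibrium warming scales with the base-2 logarithm of the CO2 concentration ratio. This is a minimal sketch, not taken from the article; the formula and the ~280 ppm preindustrial baseline are standard conventions:

```python
import math

def equilibrium_warming(sensitivity_c, co2_ppm, baseline_ppm=280.0):
    """Equilibrium warming (deg C) under the standard logarithmic forcing
    approximation: delta_T = S * log2(C / C0), where S is the equilibrium
    climate sensitivity to a doubling of CO2."""
    return sensitivity_c * math.log(co2_ppm / baseline_ppm, 2)

# Sensitivities from the text: the new INM model (1.8), its predecessor
# INM-CM4 (2.05), and the AR-5 group average (3.4). At a doubling
# (560 ppm vs. the conventional 280 ppm preindustrial baseline), the
# warming equals the sensitivity by definition of S.
for s in (1.8, 2.05, 3.4):
    print(f"S = {s} C -> warming at 560 ppm: {equilibrium_warming(s, 560.0):.2f} C")
```

Because the response is logarithmic, the gap between a 1.8⁰C model and a 3.4⁰C model widens with every further doubling, which is why the choice of model family dominates century-scale projections.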
Climate Policy Needs to Employ Scientific Best Practices
In professional weather forecasting, meteorologists do not simply average all of the available models to determine tomorrow’s temperature or snowfall. Rather, they look at the current weather situation, determine which model or models are likely to be most accurate, and base the forecast on those.
The same should be true in climate forecasting. Yet the Technical Support Document (TSD) for the Endangerment Finding relied upon a clearly failing family of models, and the most recent (Fourth) National Climate Assessment pays no special attention to INM-CM4, instead relying upon the average and the spread of the entire model suite.
Clearly the best scientific practice is to use the climate models that work: the Russian INM-CM4.8 (or the newer 5.0 in the CMIP6 suite) and the observationally based calculations of Christy and McNider (2017) or Lewis and Curry (2018).
The Social Cost of Carbon: Could It Be Negative?
An intriguing possibility is that using these models would yield a Social Cost of Carbon (SCC) that is negative (i.e., a net benefit). The Obama administration introduced the SCC, as a basis for policies subsequent to the Endangerment Finding, to help clarify the risk side of the risk-benefit calculus. That itself was risky: given empirically and observationally derived equilibrium climate sensitivities, the one GCM family that worked, and the agricultural and forest greening caused by increasing atmospheric CO2, the SCC becomes a net benefit.
In the Obama administration, the SCC was calculated by federal interagency working groups (IWGs). The IWGs had some discretion in selecting discount rates (the rates used to translate future costs and benefits into present-day dollars) and timeframes. As early as 2003, during the Bush administration, the Office of Management and Budget (OMB) indicated that models of the future economic impacts of regulation should use, among others, a 7% discount rate, which reflects the robust long-term return on securities investments since at least 1900.
But the Obama administration used lower discount rates, lower than historical trends in economic growth and investment returns would suggest. Why? It is apparent from extrapolating the administration’s own results at 3% and 5% that the SCC would be negative (a net benefit) far into the future at a 7% discount rate. In other words, had the Obama administration followed the OMB guidelines, CO2 emissions could arguably confer a net benefit.
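The arithmetic behind the discount-rate dispute is simple compounding: a dollar of climate damage decades away is worth far less today at 7% than at 3%. A minimal sketch, in which the $100 damage and the 50-year horizon are hypothetical illustrations, not figures from the article:

```python
def present_value(amount, years, rate):
    """Discount a future amount back to today at a constant annual rate."""
    return amount / (1.0 + rate) ** years

# Hypothetical: $100 of climate damage occurring 50 years from now,
# discounted at the rates discussed in the text.
for rate in (0.03, 0.05, 0.07):
    print(f"at {rate:.0%}: ${present_value(100.0, 50, rate):.2f} today")
```

At 7% the present value of that future damage is nearly a seventh of its value at 3%, which is why the choice of rate alone can flip the sign of a net SCC calculation once CO2’s near-term agricultural benefits are counted against shrunken far-future costs.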
With regard to climate change, the Obama administration used a 3.0⁰C sensitivity to doubling CO2, and a probability distribution about this figure calculated by Roe and Baker in 2007. This GCM-guided estimate should be regarded with the same (low) confidence afforded both the AR-4 and AR-5 models because of their systematic errors in the tropics (see Figures 1 and 2). They don’t work, so it is malpractice to use them in support of the Endangerment Finding, and it is best practice to use the ones that do, even if they may not support Endangerment.
More recent climate projections based upon real-world observations (rather than model output) of changes in atmospheric composition and temperature tend to produce lower warming than the AR-5 models. The most recently noted of these are Lewis and Curry (2018), with a sensitivity of 1.5-1.6⁰C (depending upon the temperature record used), and the satellite-based study of Christy and McNider (2017), with a sensitivity of 1.4⁰C.
The Obama administration used three Integrated Assessment Models, but only one of them (FUND, the “Framework for Uncertainty, Negotiation and Distribution”) had explicit multipliers for agricultural productivity enhanced by atmospheric CO2. All three models used the UN’s A1B (“midrange”) emissions scenario and discount rates below the long-term investment returns observed since 1900.
Dayaratna et al., in a 2020 article in the peer-reviewed journal Environmental Economics and Policy Studies, used realistic discount rates, an enhanced vegetation response to increasing atmospheric CO2, and two lower-sensitivity models, and found the Social Cost of Carbon to be negative (i.e., a net benefit) at least well into the second half of this century for all discount rates from 3% to 7%. That should not be surprising given the increases in life expectancy and personal wealth over the past 125 years. Those salutary trends were built upon a fossil-fuel-burning (and CO2-emitting) economy.
The challenge faced by the Biden administration is that policy goalposts have moved and become substantially more aggressive. Article 2 of the Paris Agreement speaks of achieving a balance between anthropogenic emissions and sinks “in the second half of this century.” On the basis of the IPCC 1.5℃ Special Report, this goal has been brought forward to 2050, without, as noted above, any cost-benefit analysis justifying the shift.
However, the costs of Net Zero by 2050 are extremely high. The 1.5℃ Special Report includes estimates of the shadow cost of carbon for a Net Zero by 2050 emissions trajectory. For 2030, these range from 1.4 times the Obama administration’s estimate of the Social Cost of Carbon to 64 times it, indicating that putative policy benefits fall short of costs by a very considerable margin. Similarly, Bjorn Lomborg, in a 2020 paper, estimates that each $1 of cost yields only 11 cents of climate benefits. Narrowing such a huge gap between cost and benefit will require a great deal of creativity and imagination from the Interagency Working Group, and the resulting effort quite likely will rest on implausible and unreasonable assumptions.
Should Courts Defer to the New Administration’s Climate Actions?
EPA took a considerable risk in making claims in the Preamble to its Endangerment Finding that went well beyond the far more guarded IPCC reports on which it relied.
But the federal judiciary has developed a convoluted and often unpredictable patchwork of deference doctrines for weighing agency determinations, and some lawyers think that it is merely a matter of political preference, expressed as “consequentialism,” that determines how a court majority will load the deference dice. During the Obama years, the judiciary was still heavily progressive, as it had been since the New Deal, so Democrats could expect favorable rulings on deference. And in the D.C. Circuit, that’s precisely what they got.
Still, as law professor Catherine Sharkey has written, “Even if a court is ill-equipped to evaluate details of an agency’s scientific evidence, hard-look substantive review demands that an agency supply generalist judges with reasoned explanations backed by sufficient scientific references.”
In the 2014 case Utility Air Regulatory Group v. EPA, Justice Scalia, writing for the majority, noted that:
When an agency claims to discover in a long-extant statute an unheralded power to regulate a significant portion of the American economy, we typically greet its announcement with a measure of skepticism. We expect Congress to speak clearly if it wishes to assign to an agency decisions of vast economic and political significance.
This was a warning to the EPA that if it tried anything big (such as the Clean Power Plan), the Court would “typically greet its announcement with a measure of skepticism” if there was no specific legislative backing. This is why the Supreme Court stayed the Clean Power Plan in February 2016 at the behest of Justice Scalia (his last judicial act before his sudden death).
The Scalia opinion signals that the Supreme Court may be more skeptical of new EPA climate regulations than courts have been in the past, and the current nominal 6-3 split favoring constitutionalists only strengthens that case. If the Court considers a new regulation a “major question,” it may find that Congress did not “speak clearly” enough for EPA to stretch its delegated authority to sustain a rule, or to be given the near-blanket deference that EPA has enjoyed from courts in the past. The Court may also find that a new EPA rule did not take the requisite “hard look” at the lingering uncertainties and the failure of climate models to align with climate data.
The Trump administration’s decision to withdraw from the Paris Agreement on Climate Change was lamented around the world, because no global response to climate change is likely to succeed without buy-in from the United States. Yet, over the last decade and a half, America has reduced carbon emissions more than all of Europe put together. How it has done so, however—through a widespread switch from coal to natural gas made possible by hydraulic fracturing—is cold comfort for environmentalists, for it only seems to entrench the continued primacy of fossil fuels.
But America was already unlikely to be able to meet the Obama-set Paris targets regardless of which presidential administration was in charge. The years ahead will test this hypothesis.
At the same time, it is clear that the low-end climate models are much more accurate, even if they do not support extreme policies. These were not available for the original 2009 Endangerment Finding.
There will clearly be parties with legal standing to challenge extreme policy responses and the economic losses sure to follow. The major question that follows is whether our judiciary will recognize that federal agencies must ignore a remarkable amount of subsequent science in order to support such policies.
It turns out that the general understanding of climate expressed in 2007 by the Bush EPA in response to Mass v. EPA was correct at the time: EPA needed more science before it could reach a reasoned conclusion on whether and how to regulate carbon emissions.
It has that information now—and that information no longer supports an Endangerment Finding from CO2.
Patrick J. Michaels is Senior Fellow at the CO2 Coalition and Senior Fellow in Energy and Environment with the Competitive Enterprise Institute.
This article appeared on the RealClear Energy website at https://www.realclearenergy.org/articles/2021/04/23/new_biden_climate_policies_may_face_strong_legal_headwinds_774020.html