Climate Statistics 101: See the Slide Show AOC Tried, and Failed, to Censor

[embeddoc url="https://co2coalition.org/wp-content/uploads/2020/06/LibertyCon-Rossiter-Presentation-final.pptx" viewer="microsoft"]

This is the slide show and 20-minute talk that Representatives Alexandria Ocasio-Cortez and Chellie Pingree tried to censor at the LibertyCon 2020 conference in Washington, D.C. After Dr. Rossiter gave a climate talk at LibertyCon 2019, they wrote to sponsors of the event, such as Google and Facebook, and asked them not to fund any event with an appearance by “climate deniers” from the CO2 Coalition. See https://co2coalition.org/2019/01/30/representatives-ocasio-cortez-and-pingree-and-climate-change-debate/

LibertyCon indeed lost some sponsorship, but because of its commitment to the free exchange of ideas still invited Dr. Rossiter back to speak in 2020. This is the talk he had prepared, before the coronavirus crisis forced the cancellation of the conference.

As background to this topic, we suggest that you watch the CO2 Coalition’s “CO2-Minute” video, “Carbon Dioxide: Part of a Greener Future,” at https://co2coalition.org/studies-resources/video-and-media/.

Now, on to the talk! (You can also download and distribute the slides themselves in a PowerPoint file at: https://co2coalition.org/wp-content/uploads/2020/06/LibertyCon-Rossiter-Presentation-final_6-16-20.pptx)

Slide 1

I’m Caleb Rossiter, executive director of the CO2 Coalition of climate scientists, and a former statistics professor. Welcome to Climate Statistics 101, which shows how to test hypotheses about the impact of emissions of greenhouse gases like CO2.

Statistics uses logic and probability to test for causation, for whether one thing affects another. We take nothing on faith, everything on proof. Only in law school do they teach ad hominem arguments – attacking or praising the messengers. Scholars just analyze their message.

Slide 2 – Normal Curve

This is life! It’s called the Normal Distribution or Bell Curve. It shows how far away from the average most physical and statistical things are. Things like people’s heights or the number of hurricanes in a decade.

We use the Normal Distribution to test the null hypothesis, the claim that there is no “statistically significant” difference between the average and what we actually observe. Most of the time, 68 percent of the time, observations are close to the average, within one standard deviation – a measure of the typical distance of the data from the average itself. As you move farther from the average, you get less of whatever it is you are counting. There are a lot more six-foot guys than seven-foot guys.

Slide 2A

This formula, derived from our mathematics and amazingly confirmed in nature, determines the height of the Normal curve at every point. It tells us just how often what we observe will be, simply by chance, a certain number of standard deviations away from the average.
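You can check both the formula and the famous “68 percent” figure yourself. Here is a minimal sketch in Python, using only the standard library, that computes the curve’s height and numerically integrates it from one standard deviation below the average to one above:

```python
import math

def normal_pdf(z):
    """Height of the standard Normal curve z standard deviations from the average."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

# Numerically integrate the curve from -1 to +1 standard deviations
# (simple midpoint rule) to recover the familiar "68 percent" figure.
n = 10000
width = 2.0 / n
area = sum(normal_pdf(-1 + (i + 0.5) * width) for i in range(n)) * width
print(round(area, 4))  # ~0.6827
```

The peak of the curve, at the average itself, comes out to about 0.3989, and the area within one standard deviation to about 0.6827 – the 68 percent in the slide.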

Slide 2B

This “Z-table” tells you exactly, to the third decimal place, how likely it is that an observation happened by chance. When we run an experiment, we only reject the null hypothesis, and say there is a statistically significant correlation, if the outcome would have happened by chance alone less than one time out of 20, or five percent of the time. That makes us 95 percent sure that the two variables are correlated, or move together.
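The Z-table itself can be reproduced from the standard error function. A minimal sketch, showing why about two standard deviations (z = 1.96) is the usual five-percent cutoff:

```python
import math

def two_sided_p(z):
    """Probability of landing at least |z| standard deviations from the
    average, in either direction, purely by chance under the null hypothesis."""
    return 1 - math.erf(abs(z) / math.sqrt(2))

# At z = 1.96 the two-sided probability is just about 5 percent,
# the usual cutoff for calling a result statistically significant.
print(round(two_sided_p(1.96), 4))
print(round(two_sided_p(1.00), 4))  # one standard deviation: nowhere near significant
```

An observation one standard deviation out happens by chance about 32 percent of the time – unremarkable – while one nearly two standard deviations out happens only about 5 percent of the time.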

Slide 2C

Now, correlation is not necessarily causation. Life is not bivariate – based on just the two things you are measuring. Unless you can randomly assign subjects, life is multivariate, with other variables causing changes too. This is the most common error in public policy. In Latin it’s called post hoc ergo propter hoc: this thing happened after that thing, so it was caused by it. Here’s an example.

Slide 3 – Scouting and Delinquency

Does being a Boy Scout keep you out of trouble? Quick, hold a press conference: only nine percent of scouts are delinquents, versus 15 percent of non-scouts.

Slide 3A

The probability of getting such a big difference by chance, when the null expectation is the pooled rate of 12 percent for both groups, is …

Slide 3B

… less than one percent. You can see that in the column labeled “Probability?” 0.009 is less than 0.01, which is one percent. Scouting works!
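The arithmetic behind that probability can be sketched as a standard two-proportion z-test. The sample sizes below are hypothetical – the slide reports only the rates – so the exact probability depends on them:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference of two sample proportions,
    pooled under the null hypothesis of no difference."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical sample sizes of 400 per group -- the slide gives only the
# rates (15% of non-Scouts vs 9% of Scouts delinquent, 12% pooled).
z = two_proportion_z(0.15, 400, 0.09, 400)
p_value = 1 - math.erf(abs(z) / math.sqrt(2))  # two-sided probability
print("z =", round(z, 2), " p =", round(p_value, 4))
```

With 400 in each group the difference is about 2.6 standard deviations out, and the chance probability comes in under one percent – far below the five-percent cutoff.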

Slide 4 – Subgroups

Now let’s control for another important variable in life, a family’s income level. But this slide shows that among low-income families there is no difference in delinquency rates between Scouts and non-Scouts; both are at 20 percent.

Slide 4A

And in middle-income families, again no difference, at 12 percent. I guess all the difference in delinquency must come from the high-income families.

Slide 4B

Huh? No difference here either, with both groups at four percent delinquency.

Slide 4C

So, oops, the correlation disappears when controlling for income, which unlike scouting, is truly correlated with arrests.
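The disappearing correlation can be reproduced with illustrative counts (hypothetical – the slides give only percentages). Within every income group Scouts and non-Scouts are identical, yet the pooled rates differ, because Scouts are drawn disproportionately from higher-income families:

```python
# Hypothetical counts: within each income group, Scouts and non-Scouts
# have the SAME delinquency rate, but the groups are weighted differently.
groups = {
    #          (scouts, scout_delinq, non_scouts, non_scout_delinq)
    "low":    (100, 20, 600, 120),   # 20% delinquent for both
    "middle": (300, 36, 300, 36),    # 12% for both
    "high":   (600, 24, 100, 4),     # 4% for both
}

scouts = sum(g[0] for g in groups.values())
scout_del = sum(g[1] for g in groups.values())
non = sum(g[2] for g in groups.values())
non_del = sum(g[3] for g in groups.values())

# Pooled rates differ even though every subgroup rate is identical.
print(f"Scouts: {scout_del / scouts:.0%}   Non-Scouts: {non_del / non:.0%}")
```

This is the classic aggregation trap: the pooled rates (8 percent versus 16 percent here) are produced entirely by income composition, not by scouting.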

Slide 5 – Trends in Crime

Another way that correlation is confused with causation is in trend lines. You see the drop in violent crime in a city. The mayor, of course, calls a press conference to take credit.

Slide 5A

Everybody has their own explanation.

Slide 5B

But they agree something caused the drop to happen.

Slide 5C

A rising trend line seems to say so.

Slide 5D

But a flat trend line …

Slide 5E

… or a falling line would seem to indicate that it’s all just random fluctuations. Cancel the press conference.

But the point here is that NONE of these things we are seeing with our eyes have ANY proof in them. A simple bivariate chart is inherently misleading. All these graphs are actually worse than useless, because they trick people into thinking they prove something!
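What the eyeball can’t do, a slope test can. A minimal sketch with made-up yearly counts: fit a least-squares trend line and ask whether the slope is more than two standard errors from zero.

```python
def ols_slope(y):
    """Least-squares slope of y against time, plus the slope's standard error."""
    n = len(y)
    xbar, ybar = (n - 1) / 2, sum(y) / n
    sxx = sum((x - xbar) ** 2 for x in range(n))
    slope = sum((x - xbar) * (v - ybar) for x, v in enumerate(y)) / sxx
    rss = sum((v - ybar - slope * (x - xbar)) ** 2 for x, v in enumerate(y))
    return slope, (rss / (n - 2) / sxx) ** 0.5

# Ten made-up years of crime counts that wander with no real pattern.
counts = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
slope, se = ols_slope(counts)
print(round(slope, 2), round(se, 2))  # slope within 2 standard errors of zero
```

The fitted line has a visibly positive slope – a mayor could draw it on a chart – but it is well within two standard errors of zero, so it proves nothing beyond chance.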

Slide 6 – Sea Levels

Let’s apply what we’ve learned to “climate change.” Take sea-level. There weren’t enough emissions for CO2 to be a factor until 1950, so we compare the rate of sea-level rise before and after 1950, and see if it has increased.

But the United Nations Intergovernmental Panel on Climate Change agrees that the difference in the two slopes isn’t statistically significant. No “climate change.” [See Testimony of Caleb S. Rossiter, Ph.D. before the Subcommittee on the Environment of the House Committee on Oversight and Reform]
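Testing whether the rate of rise changed amounts to comparing two fitted slopes. Here is a minimal sketch with hypothetical tide-gauge numbers (the same underlying rate in both periods, so the test correctly finds no significant change):

```python
import math

def slope_and_se(y):
    """Least-squares slope of y against time and its standard error."""
    n = len(y)
    xbar, ybar = (n - 1) / 2, sum(y) / n
    sxx = sum((x - xbar) ** 2 for x in range(n))
    b = sum((x - xbar) * (v - ybar) for x, v in enumerate(y)) / sxx
    rss = sum((v - ybar - b * (x - xbar)) ** 2 for x, v in enumerate(y))
    return b, math.sqrt(rss / (n - 2) / sxx)

def slope_difference_z(y1, y2):
    """z statistic for the difference between two independent trend slopes."""
    b1, s1 = slope_and_se(y1)
    b2, s2 = slope_and_se(y2)
    return (b2 - b1) / math.sqrt(s1 ** 2 + s2 ** 2)

# Hypothetical tide-gauge readings (mm): the same 1.8 mm/yr underlying
# rate in both periods, plus identical measurement wiggle.
wiggle = [0.3, -0.2, 0.4, -0.1, 0.2, -0.3, 0.1, 0.0, -0.4, 0.2]
pre_1950 = [1.8 * i + w for i, w in enumerate(wiggle)]
post_1950 = [90.0 + v for v in pre_1950]

z = slope_difference_z(pre_1950, post_1950)
print("z =", round(z, 3), "-- significant?", abs(z) > 1.96)
```

The large-sample z statistic here is essentially zero, so the null hypothesis of no change in the rate of rise stands – the same conclusion the talk draws for the real sea-level record.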

Slide 7 – Sea Level by Presidents

Here’s a fun way to look at the same sort of data: sea level has actually been rising since the 1800s, after the Little Ice Age ended, at the same rate for all sorts of presidents and levels of emissions!

Slide 8 – Hurricanes

How about hurricanes? If you just look at the trend from 1970 to 2010, you’d see a rise. But from 1940 to 2010, you’d see a drop. And from 1850 to 2010, you’d see no trend at all. It’s the same for floods, wildfires, and droughts: no long-term, statistically significant increases from the CO2 effect. “Detection and Attribution” studies claim to detect a rise in some extreme weather variable like hurricanes and then they attribute that rise to increased temperature from human activity. These studies lie in the realm of politics, not science, because there’s no way to tell if the increase in temperature was natural or based on industrial emissions.
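Start-date sensitivity is easy to demonstrate with a made-up storm-count series that cycles (an 80-year cycle peaking near 1945 here) but has no built-in long-term trend:

```python
import math

def slope(y):
    """Least-squares trend slope of y against time."""
    n = len(y)
    xbar, ybar = (n - 1) / 2, sum(y) / n
    sxx = sum((x - xbar) ** 2 for x in range(n))
    return sum((x - xbar) * (v - ybar) for x, v in enumerate(y)) / sxx

# Made-up storm counts: an 80-year cycle peaking near 1945, with no
# built-in long-term trend.
counts = {yr: 10 + 3 * math.sin(2 * math.pi * (yr - 1925) / 80)
          for yr in range(1850, 2011)}

for start in (1850, 1940, 1970):
    window = [counts[yr] for yr in range(start, 2011)]
    print(f"{start}-2010 trend: {slope(window):+.3f} storms per year")
```

The very same series yields a clearly rising trend from 1970, a clearly falling trend from 1940, and a near-zero trend over the full record – exactly the pattern described above, produced by nothing but the choice of start date.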

Slide 9 – Global Mean Surface Temperature

And speaking of temperature, here is an iconic but misleading UN IPCC graph. It shows the average change in temperature at ground stations, along with uncertainty and a long-term trend line, in blue.

There’s a half degree rise from 1910 to 1940, a flat period until 1980, and then another half degree rise to 2010. With CO2 levels barely rising until 1950, and then zooming up since then, that’s a lot of variation that’s not explained by CO2 emissions. Chaos, natural fluctuation, and unknown or hard to quantify cycles are all part of this picture.

What’s so misleading? First, it’s hard to estimate a global or even local average temperature in tenths of a degree. You can see that by the uncertainty, which itself is a guess. Ground temperature stations are problematic, not just in 1880 but today. Second, the data from decades ago are constantly being adjusted with new rules to show more rise.

Slide 9A

It’s hard enough to estimate global surface temperature today, let alone in 1900. We really should be looking at it on a scale of whole degrees rather than tenths, like here. These are the exact same data. Hard to see a trend at all.

Slide 9B

Satellite and balloon readings of the troposphere are much more credible than the surface data, but they have only been gathered since 1980, so we can’t use them for longer trends.

Slide 10 – Warm days in Ohio

Here’s a typical temperature trick, courtesy of Tony Heller. The number of days per year over 90 degrees in this town has been decreasing from 1890 to 2017. Tony shows us how to reverse that.

Slide 10A

He moves the start date until you can declare an increase! At 1955 you get the “climate change” graph you need. This sort of misleading shopping for a start date is often done in UN and U.S. climate reports. [See U.S. Government Climate Science vs. U.S. Government Climate Crisis]

Slide 11 – CO2 and Temperature

Here’s a famous UN slide of carbon dioxide and Antarctic temperature from ice cores.

Slide 11A

Well, the slide became infamous when Albert Gore Jr. gave us “correlation means causation” at its worst. Vice President Gore says CO2 drives temperature, but it’s mostly the other way around: lengthy cycles in earth’s orbit change the Sun’s impact and drive temperature, which drives CO2. Gore hops on a riser to convince us that as CO2 keeps going up from industrial emissions, it will drag temperature along.

Slide 11B

That prediction is already false, since temperature has barely budged on this scale.

Slide 12 – A Thousand Years of Temperature

Now, is it hotter now than any time in the past 1,000 years? It’s a silly question, because coverage is minimal for old data. But even if the answer were yes, it wouldn’t prove anything about what caused the recent rise. We have a lot of the “hottest years on record” recently only because the record is just one hundred years old and we happen to be in a period of slight natural warming. That started in about 1800, well before the CO2 era. Of course, during warming more recent years tend to be hotter than earlier years!
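The “record years” point can be illustrated with made-up numbers: give any noisy series a slow steady rise – whatever its cause, the records can’t tell you – and the record-setting years inevitably crowd toward the end.

```python
import random
random.seed(42)

# 120 made-up years of temperature anomalies: a slow rise of 0.01 degrees
# per year plus measurement noise of up to a tenth of a degree.
temps = [0.01 * yr + random.uniform(-0.1, 0.1) for yr in range(120)]

# A "record year" is any year warmer than every year before it.
records = []
best = float("-inf")
for yr, t in enumerate(temps):
    if t > best:
        best = t
        records.append(yr)

print("record-setting years:", records)
```

Because the rise dwarfs the noise over a century, the latest record always falls in the final stretch of the series – a statement about the shape of the trend, not about what caused it.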

This graph appeared in the first UN report on climate change, in 1990. It represents a reconstruction by climate historians from diaries and proxies of what was roughly happening to the globe. It goes up about a degree in a Medieval Warm Period and down about a degree in the Little Ice Age, and then back up again as that ended, all from natural causes. Many in the public claimed that the image was embarrassing to the “climate change” narrative.

Slide 12A

Rather than try to educate the public about why this bivariate graph has no bearing whatsoever on what has caused our recent multivariate warming, the climate change establishment decided that the graph had to go, and in 2001’s UN report, it did.

Slide 12B

In place of Medieval warming we got a “hockey stick” with the blade rising only in the industrial era. Temperature, it seems, was naturally constant until bad fossil fuels came along.

Now, for all the creative math used in creating this chart from a few tree rings, it’s no better or more certain than the previous, hand-drawn one.

Slide 12C

And I’ll show you why.

Slide 12D

A key UN proxy set estimated by a researcher named Briffa shows that in recent years, temperatures calculated from tree rings go down, while we know temperature was rising. Rather than rethink using tree rings at all, the UN crowd – as revealed by their own emails in the Climategate scandal – just threw out the recent data to “hide the decline.”

We have a saying about complex calculations that rely on uncertain data: Garbage in, garbage out.

Slide 13 – Climate Models

Speaking of problems with data and calculations, let’s end with the mathematical computer models that drive the debate about dangerous warming. Global Climate Models are based on the General Circulation Models that predict local weather conditions on your TV every night. Weather models start with excellent local data on conditions and look at probabilities based on previous, actual weather in such conditions. For a few days, then, they can make an educated local guess.

The global models, though, use average data for very large blocks of air, land, and sea, then add in carbon dioxide emissions and run for decades into the future, where there can be no comparison to actual conditions. The models also use thousands of estimates, called parameters, to represent with just one number the effect of many complex and chaotic physical processes, like the Hadley cells, wind systems that move heat from the lower latitudes to the poles.

Legendary physicist Freeman Dyson dismissed such parameters as “fudge factors.” They’re all just guesses which, like the Medieval warming graph, are twiddled and tweaked until the temperature output for past decades is “tuned” to match the surface temperature record for those decades.

Now, of course, the surface record is going to be wrong at the start – it’s a rough estimate itself – and the parameters are going to be wrong as well – they’re guesses. What a mess! And then the true parameters will change both cyclically and chaotically as the model is run into the future, but the modeled parameters will not. Yikes!

The crucial output of these models is an estimate of “climate sensitivity” – the degrees of warming we’ll get from a doubling of CO2 – but that is completely determined by the modeler’s choice of the input parameter for how powerful CO2 molecules are at warming! Crazy … but true. Like the hockey stick, this turns out to be a waste of time. Again, how do we know?

Well, first we can test these models’ projections against what actually happened, and the projections consistently run about three times too hot over the past 30 years. So the modelers constantly have to “retune” the parameters and project the future all over again.

But this graph here can’t be tuned away, because it tests even the up-to-date models’ projections for the troposphere, up to about 40,000 feet, where they can be checked against the far better satellite and balloon readings. Take a look: the models’ projections for the troposphere, the thick red line, also run three times too hot compared to actual temperatures, the purple line, right now, without even waiting for the future.

Slide 14 – The Elephant Paper

You can understand models’ weaknesses from a cautionary tale about an elephant. John von Neumann was a legendary mathematician and atomic bomb maker who tried to build a climate model after World War II. He wanted to use it as a weapon, to create a drought in the Soviet Union.

When von Neumann gave up, he laughed: “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” Recently, though, three mathematicians wrote this paper showing how it could be done with functions of complex numbers. See https://publications.mpi-cbg.de/Mayer_2010_4314.pdf

Slide 15 – The Elephant

The first four complex parameters draw the elephant on the left. Then, in the graph on the right, a fifth parameter is added and adjusted, giving us some different placements of the trunk … it wiggles, as required! The point here is that mathematical climate models are controlled by their thousands of convenient choices of parameters, and you can make those parameters do anything you want. And because models can’t be tested statistically, all we are left with again is art, not science.
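For readers who want to try it, here is a sketch of the four-parameter elephant, using the complex parameter values from the Mayer, Khairy, and Howard paper. The fifth (trunk-wiggle) parameter and the extra “eye” point the paper adds are omitted for brevity, and plotting is left to the reader:

```python
import math

# Complex parameter values from the Mayer-Khairy-Howard paper.
p1, p2, p3, p4 = 50 - 30j, 18 + 8j, 12 - 10j, -14 - 60j

def fourier(t, coeffs):
    """Truncated Fourier series: real parts weight cosines, imaginary parts sines."""
    return sum(c.real * math.cos(k * t) + c.imag * math.sin(k * t)
               for k, c in enumerate(coeffs))

# Pack the four parameters into the paper's Fourier coefficients for x(t), y(t).
Cx = [0, p1.real * 1j, p2.real * 1j, p3.real, 0, p4.real]
Cy = [0, p4.imag + p1.imag * 1j, p2.imag * 1j, p3.imag * 1j]

n = 400
outline = [(fourier(2 * math.pi * i / n, Cy),   # points plotted as (y, -x),
            -fourier(2 * math.pi * i / n, Cx))  # matching the paper's figure
           for i in range(n)]

print(len(outline), "outline points; first point:", outline[0])
```

Feed `outline` to any charting tool and the elephant appears – four complex numbers, eight real degrees of freedom, one pachyderm.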

Slide 16 – Questions

Well, this is Professor Caleb Rossiter, and I hope you have thought a lot and learned at least a little with this lecture on Climate Statistics 101. Please feel free to contact me with any questions or comments, at info@co2coalition.org.

