06.22.2021

Global Warming is happening, what does it mean?

By Andy May

The concepts and data used to make temperature and climate reconstructions, or estimates, are constantly evolving. Currently, there are over 100,000 global weather stations on land and over 4,500 Argo floats and weather buoys at sea, in addition to regular measurements by satellites and ships. The measurement locations are known accurately, the date and time of each measurement are known, and the instruments are mostly accurate to ±0.5°C or better. Thus, we can calculate a reasonable global average surface temperature. However, the farther we go into the past, the fewer measurements we have. Prior to 2005, sea-surface coverage deteriorates quickly, and prior to 1950 the land-based weather station network is quite poor, especially in the Southern Hemisphere. Before 1850, the coverage is so poor as to be unusable for estimating a global average temperature. Prior to 1714, the calibrated thermometer had not even been invented; the world had to wait for Gabriel Fahrenheit.
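For readers curious about the mechanics, here is a minimal sketch of how a global average surface temperature is typically computed from gridded data: each grid cell is weighted by its area, which shrinks toward the poles in proportion to the cosine of latitude. The 2° grid and the random anomalies below are placeholders, not real data, and this is a generic illustration, not the exact method of any particular group.

```python
# Minimal sketch: area-weighted global mean from a hypothetical 2-degree grid.
import numpy as np

lats = np.arange(-89.0, 90.0, 2.0)         # grid-cell center latitudes
lons = np.arange(-179.0, 180.0, 2.0)       # grid-cell center longitudes

# Placeholder temperature anomalies (deg C); real work would use station,
# buoy, and satellite data interpolated onto the grid.
rng = np.random.default_rng(42)
anomalies = rng.normal(0.0, 0.5, (lats.size, lons.size))

# Grid-cell area shrinks toward the poles in proportion to cos(latitude),
# so each latitude band gets a cosine weight.
weights = np.repeat(np.cos(np.radians(lats))[:, None], lons.size, axis=1)

global_mean = np.average(anomalies, weights=weights)
print(f"Area-weighted global mean anomaly: {global_mean:+.3f} deg C")
```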

Is the global average temperature a useful climate metric? How do we compare today’s global or hemispheric temperature to the past? Modern instruments covering the globe have only been in place since 2005, plus or minus a few years. If we have accurate measurements only since 2005, how do we compare them to global temperatures hundreds or thousands of years ago? The world has warmed 0.8°C since 1950 and 0.3°C since 2005; that doesn’t sound very scary. This two-part series will investigate these questions. We will propose a solution to the problem in the second post, which should be up in a day or two.

Attempts to construct hemispheric or global temperature records that extend back 1,000 or more years are unconvincing so far. These statistical reconstructions combine various temperature proxies into a single temperature record of the past. The proxies include tree-ring measurements, oxygen isotope measurements from ice cores, Mg/Ca ratios in fossil shells, organic paleothermometers such as TEX86, borehole temperature surveys, and other temperature-related measurements from lake or ocean floor sediment cores. There are numerous problems comparing these records to the present global temperature record.

  1. All present temperature measurements, at least since 2005, are calibrated to an objective temperature standard; they are daily, well-timed measurements, and the location and elevation of each measurement are known precisely. The global coverage on land, at sea, and from satellites is good.
  2. The various proxies used in the reconstructions all have biases, and their relationship to surface temperature is often seasonal. Summer temperatures often change at a different rate than winter temperatures.
  3. All proxies are affected by influences other than temperature. Tree rings, for example, respond to precipitation, wind speed, and CO2 (National Research Council, 2006, pp. 45-52). Ice core records are affected by elevation, cloud height, and precipitation rates (Vinther, et al., 2009). Proxies also lose temporal and temperature accuracy with age.
  4. The statistical methods used to create the temperature reconstructions are inadequate for their purpose and flawed (Wegman, Scott, & Said, 2010). The methods produce a quantitative result, but they do not “guarantee physical meaning” or “physical reality” (Soon, Baliunas, Idso, Idso, & Legates, 2003b). A simple numerical illustration of this point follows the list.
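To illustrate the concern in point 4 numerically, the synthetic example below averages many noisy proxies, each smoothed to a different temporal resolution, and shows that the resulting curve has lost much of the variability of the underlying signal. The series are made up; the suppression effect, not the particular numbers, is the point.

```python
# Synthetic demonstration: averaging differently smoothed, noisy proxies
# suppresses the variability of the underlying "climate" signal.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1000)
signal = np.sin(years / 30.0)             # multi-decadal temperature swings

proxies = []
for _ in range(10):
    noisy = signal + rng.normal(0.0, 1.0, years.size)
    width = int(rng.integers(20, 200))    # each proxy's temporal resolution
    kernel = np.ones(width) / width       # simple moving-average smoothing
    proxies.append(np.convolve(noisy, kernel, mode="same"))

reconstruction = np.mean(proxies, axis=0)
print(f"Std. dev. of the true signal:    {signal.std():.2f}")
print(f"Std. dev. of the reconstruction: {reconstruction.std():.2f}")
```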

Longer-term natural climatic changes affect Earth by latitude; thus, the Southern Hemisphere is often out of sync with the Northern Hemisphere. Further, the mid-latitudes of the Northern Hemisphere (30°N to 60°N) have more temperature extremes than the rest of the Earth, due to the concentration of land area there, as shown in Figure 1.

Figure 1. Multi-proxy reconstructions of global temperature by latitude: Antarctic (90°S-60°S), Southern Hemisphere (SH, 60°S-30°S), Tropics (30°S-30°N), Northern Hemisphere (NH, 30°N-60°N), and Arctic (60°N-90°N). Source: May, 2017.

Bo Christiansen and Fredrik Ljungqvist (Christiansen & Ljungqvist, 2011) have shown that the various spatial regression techniques used in the past, generally some form of principal component analysis, significantly suppress both long-term and short-term temperature variability. Their “LOC” method is similar in concept to the method used to make Figure 1, but more sophisticated. They produce local reconstructions at the site of each temperature proxy. Each proxy is tested for significance against modern local instrumental temperatures and rejected if it fails. Local reconstructions are then averaged over the extra-tropical (>30°N) Northern Hemisphere. By keeping the process simple, more variability is preserved, but even these reconstructions do not have the variability of modern temperature records and cannot be directly compared to them; the proxy locations are too sparse. Further, all local proxies are calibrated with modern instrumental temperatures, so there is no independent check on the validity of pre-instrumental proxy-derived temperatures.
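A simplified sketch of this local-calibration idea is shown below, with synthetic data. The significance threshold, the linear regression, and the simple average are illustrative assumptions in the spirit of the LOC method, not Christiansen and Ljungqvist’s exact procedure.

```python
# Sketch of a LOC-style workflow: calibrate each proxy locally, screen it
# against instrumental data, then average the surviving local reconstructions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def calibrate_site(proxy_overlap, temp_overlap, proxy_full, alpha=0.05):
    """Regress a proxy on local instrumental temperature over the overlap
    period; return a full-length local reconstruction, or None if the
    proxy fails the significance screen."""
    _, p = stats.pearsonr(proxy_overlap, temp_overlap)
    if p >= alpha:
        return None                        # reject: no significant local link
    fit = stats.linregress(proxy_overlap, temp_overlap)
    return fit.intercept + fit.slope * proxy_full

# Three hypothetical extra-tropical sites: 1,000-year proxies with a
# 150-year instrumental overlap at the recent end.
local_recons = []
for _ in range(3):
    proxy = rng.normal(0.0, 1.0, 1000)
    temp = 0.8 * proxy[-150:] + rng.normal(0.0, 0.3, 150)
    local_recons.append(calibrate_site(proxy[-150:], temp, proxy))

accepted = [r for r in local_recons if r is not None]
nh_average = np.mean(accepted, axis=0)    # simple extra-tropical NH average
print(f"{len(accepted)} of {len(local_recons)} proxies passed screening")
```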

Renee Hannon has compared the best ice core temperature proxies from Greenland and Antarctica and found a relationship similar to the one shown in Figure 1. Figure 2 compares the polar temperature records to the total insolation received at 65°N in June and at 65°S in December, which varies with Earth’s orbital characteristics.

Figure 2. Ice core derived surface temperatures for Greenland in green and Antarctica in red, compared to the respective insolation at 65°N in June and 65°S in December. Source: Renee Hannon.


Even accounting for changes in insolation (see the right-hand scale), the swings in the Northern Hemisphere are much more dramatic than those in the Southern Hemisphere. Hannon explains that the difference in extremes is probably due to the Arctic Ocean being surrounded by land, while the Southern Ocean surrounds Antarctica. There are also orbital and other natural influences on our climate, and not all of them are incorporated into the insolation curves in Figure 2. They are explained well by Javier Vinós here and here.

CO2 records from Antarctic and Greenland ice cores do not match one another, which is odd because CO2 is normally considered to be a well-mixed gas. Greenland ice core CO2 measurements mostly run higher than Antarctic CO2 estimates and are more variable. Most researchers believe the Greenland measurements are contaminated and don’t use them; they rely only on the Antarctic measurements. This view may be flawed, as Renee Hannon concludes here.

Further, as Hannon points out, Antarctic long-term temperature reconstructions correlate well with Antarctic CO2 ice core measurements, but Greenland temperatures have a negative correlation with Antarctic CO2 concentration. Greenland CO2 measurements correlate better with Greenland temperature reconstructions than the Antarctic CO2 record does. Because CO2 is well-mixed, it should have a global effect on temperature over the long term, that is, over periods of more than a few years. Why does CO2 have a positive correlation with South Pole temperatures and a negative correlation with North Pole temperatures? This would seem to preclude CO2 as a dominant climate influence, at least over the long term. The negative correlation strongly suggests that temperature is driving CO2 concentrations, in both Antarctica and Greenland, and not the other way around.
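The correlation test behind this argument is straightforward to reproduce in outline. In the sketch below, the polar records are synthetic stand-ins constructed to mimic the sign pattern Hannon describes; with real ice-core data, the records would first be interpolated onto a common age scale.

```python
# Outline of the polar correlation check, using synthetic stand-in records.
import numpy as np

rng = np.random.default_rng(1)
ages = np.arange(0, 10000, 100)           # years before present

# Stand-ins built to mimic the reported pattern: Greenland temperatures
# anti-phased with Antarctic temperatures, CO2 tracking the Antarctic record.
antarctic_temp = np.sin(ages / 1500.0) + rng.normal(0.0, 0.2, ages.size)
greenland_temp = -0.6 * antarctic_temp + rng.normal(0.0, 0.4, ages.size)
antarctic_co2 = 0.9 * antarctic_temp + rng.normal(0.0, 0.2, ages.size)

r_south = np.corrcoef(antarctic_co2, antarctic_temp)[0, 1]
r_north = np.corrcoef(antarctic_co2, greenland_temp)[0, 1]
print(f"Antarctic CO2 vs Antarctic temperature: r = {r_south:+.2f}")
print(f"Antarctic CO2 vs Greenland temperature: r = {r_north:+.2f}")
```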

Like polar climatic events, the Medieval Warm Period and Little Ice Age are not spatially or temporally synchronous everywhere (Soon, Baliunas, Idso, Idso, & Legates, 2003b). The problems with hemispheric and global averages become very apparent when we recognize that local weather and climate vary dramatically around the world. Climate and climate change are local, not global or even hemispheric.

Short-term variability is lost when mixing proxies of different temporal resolutions. Long-term variability is lost through standardization techniques meant to correct for non-climatic biases, which unfortunately also remove climate extremes. Proxy bias, whether seasonal or due to some other environmental factor, is systematic and is not included in the confidence intervals provided with the reconstructions (Christiansen & Ljungqvist, 2011). The “confidence intervals” assume the errors are random and that short-term trends fluctuate around a mean that is close to the correct temperature. Systematic errors, true to their name, can move the long-term trend away from the true value.
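The difference between random and systematic error is easy to see numerically. In the illustrative simulation below, a fixed bias (say, a seasonal one) shifts the reconstructed mean, yet the confidence interval, computed from random scatter alone, remains narrow and confidently excludes the true value. All numbers are assumptions for illustration.

```python
# Illustration: a confidence interval built from random scatter says nothing
# about a systematic bias, which shifts the whole reconstruction.
import numpy as np

rng = np.random.default_rng(2)
true_temp = 15.0                           # "true" mean temperature, deg C
systematic_bias = 0.7                      # e.g., a summer-season bias, deg C
scatter = rng.normal(0.0, 0.5, 200)        # random proxy noise

sample = true_temp + systematic_bias + scatter
mean = sample.mean()
half_width = 1.96 * sample.std(ddof=1) / np.sqrt(sample.size)

print(f"95% CI: {mean - half_width:.2f} to {mean + half_width:.2f} deg C")
print(f"True value: {true_temp:.2f} deg C, outside the interval")
```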

Global or hemispheric climatic changes, like the Little Ice Age or the Medieval Warm Period, only appear as smoothed anomalies of 0.5 to 1.5°C, but local, yearly changes are much larger (Soon, Baliunas, Idso, Idso, & Legates, 2003b).

The final weakness of all hemispheric and global reconstructions is the sparseness of the data. The reconstructions used in Figure 1 are based on very few proxies. The Antarctic and Southern Hemisphere reconstructions have only three suitable proxies each. The tropics and Northern Hemisphere reconstructions are based on seven proxies each. The Arctic has nine. This coverage cannot be legitimately compared to the modern record, which has over 100,000 weather stations with readings that are far more precise than proxies and are accurately located and timed.

We need to stop this farcical attempt to compare modern instrumental temperatures over entire hemispheres or the globe to proxy-based temperatures for the past millennium or longer. Yet many pursue this Sisyphean task. It seems likely that the upcoming AR6 document will try to make direct comparisons of the modern era to the PETM (the Paleocene-Eocene Thermal Maximum), 56 million years ago!

Talk about temporal resolution problems. There are only seven CO2 estimates that include rate data between 55 Ma (million years ago) and 56 Ma, as shown in Figure 3. Yet Philip Gingerich claims rates of “carbon” emissions today are 9-10 times higher than in the PETM. Seriously, how does he know that with only seven measurements in one million years (Gingerich, 2019)? Gingerich’s “carbon” (he means CO2) accumulation rates are computed over very roughly estimated periods of 3,000 to 20,000 years, hardly relevant to today’s detailed record since about 1950.

Figure 3. Reconstructed CO2 and temperatures during the PETM. Data sources: CO2 (Beerling & Royer, 2011) and Denmark SSTs (Stokke, Jones, Tierney, Svensen, & Whiteside, 2020).

As noted in Figure 3, on average, CO2 was just a little higher than today during the PETM warming, and three estimates are lower than today. The blue line in Figure 3 is today’s average atmospheric concentration. During the PETM, sea surface temperatures (SST) near Denmark reached 33°C. But significantly higher CO2 concentrations were not reached until a million years later. They were also higher three million years earlier. How do you compare this period to today when we have daily measurements of both temperature and CO2? It is not possible.
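The resolution problem behind Gingerich’s comparison can be shown with a few lines of arithmetic. The toy calculation below spreads a hypothetical carbon pulse over estimation windows of different lengths; the pulse size and durations are made up, but they show how coarse temporal resolution automatically lowers the apparent emission rate.

```python
# Toy calculation: the same carbon release looks far slower when its rate
# can only be estimated over a multi-millennial proxy window.
total_release = 3000.0                     # Gt carbon, hypothetical pulse

# If the release actually happened over a couple of centuries:
actual_duration = 200.0                    # years, assumed
print(f"Rate over the actual {actual_duration:.0f}-yr event: "
      f"{total_release / actual_duration:.1f} Gt/yr")

# The same pulse smeared over coarse windows like those in the PETM proxies:
for window in (3000.0, 20000.0):
    print(f"Apparent rate over a {window:,.0f}-yr window: "
          f"{total_release / window:.2f} Gt/yr")
```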

Climate and climate change are regional, regardless of the cause. As already discussed, insolation affects the globe by latitude. Changing ocean currents, such as ENSO, and shifts in atmospheric phenomena like the ITCZ (Intertropical Convergence Zone), are regional. If CO2 dominates climate change, why aren’t the changes global?

Willie Soon, Sallie Baliunas, Craig Idso, Sherwood Idso, and David Legates were way ahead of their time when they published two critical papers in 2003 on this subject (Soon & Baliunas, 2003) and (Soon, Baliunas, Idso, Idso, & Legates, 2003b). They recognized the conceptual flaws in Mann, Bradley, and Hughes’ various proxy-based “hockey sticks” very early. Later, the statistical techniques and the proxies used to generate the hockey stick were shown to be invalid by Steve McIntyre and Ross McKitrick (McIntyre & McKitrick, 2005), as immortalized by Andrew Montford in his monumental work, The Hockey Stick Illusion. We often focus on Mann’s hockey stick, but the same problems exist in all regression-based proxy temperature reconstructions, including Moberg’s, Marcott’s, and many more (Moberg, Sonechkin, Holmgren, Datsenko, & Karlen, 2005) and (Marcott, Shakun, Clark, & Mix, 2013).

Ice core data for the past 2,000 years have an annual resolution, since annual layers are typically recognizable in cores that young. Recent tree ring and coral record dates are also often accurate to the year. Historical records, such as the positions of glaciers, are sometimes accurate to the day. Proxies older than one to two thousand years, or of other types, typically have much less accurate dates. The proxy temperatures estimated from tree rings, ice cores, and corals are affected by wind speed, elevation changes, cloud height, and other environmental factors. Large-scale surface temperature reconstructions prior to 1600 AD have low confidence, primarily due to the lack of precise dates for most or all of the proxy samples. Older proxy temperatures are also suspect due to the short length of the instrumental record used to calibrate them. It is recognized that the temperature-proxy relationship might change with time (National Research Council, 2006, pp. 19-21). The lack of early-time temperature calibration data causes the potential systematic error to increase the farther we go into the past.

These sources of error, especially the dating errors, preclude combining most proxies into one temperature record. If the proxy temperature errors were random, and not systematic due to bias, one might consider combining them, even allowing for dating problems. But even then, the resulting record would have such a coarse resolution that no warming rate could be computed that is comparable to modern instrumental warming rates. The idea that current warming rates or current temperatures are extreme relative to the past is without foundation (National Research Council, 2006, pp. 20-21).

Regarding the questions at the beginning of the post:
Is the global average temperature a useful climate metric? This metric only applies to global forces. CO2, whether man-made or natural, might be a global force, but we have yet to see any evidence that its effect is large enough to be detected or measured. The CO2 influence has been modeled, with unvalidated models, but you can do anything with a model if you don’t have to prove it works. See here for a deeper look at climate model problems. Natural climate influences are regional. Prior to 2005 for surface data, or 1979 for satellite data, global coverage was poor, leaving us with an exceedingly short, but accurate, global temperature record. It is too short to detect a global CO2 signal as small as the IPCC models estimate today, roughly +3.4 W/m2 of forcing over the past 150 years (IPCC AR5, pp. 817-818).
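For context on that forcing figure, converting a forcing into an expected warming requires an assumed climate sensitivity parameter (lambda), which is itself model-derived. The short sketch below applies the standard linear relation dT = lambda * dF across a commonly cited range of lambda values; the lambda values are assumptions, not measurements.

```python
# Back-of-the-envelope: the equilibrium warming implied by a forcing depends
# entirely on the assumed climate sensitivity parameter (lambda).
delta_f = 3.4                              # W/m2, the forcing quoted above

# Commonly cited range for lambda, K per (W/m2); assumed, not measured.
for lam in (0.3, 0.5, 0.8):
    print(f"lambda = {lam:.1f} K/(W/m2) -> dT = {delta_f * lam:.2f} K")
```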

How do we compare today’s global or hemispheric temperature to the past? We don’t. Adequate data do not exist, either in the instrumental record or in proxies. The best solution is to compare individual proxies to locally measured modern temperatures.

Using the data we have, the world has warmed a paltry 0.8°C since 1950 and 0.3°C since 2005. Temperature varied much more than that yesterday in Texas. There are good proxy records that go far into the past, and we have global instrumental coverage of surface and ocean temperatures today. Why try to make a global temperature from sparse and inaccurate proxies? Why not pick a proxy and compute a modern temperature for that location? I’ll show some examples in the next post.

Download the bibliography here.

This article appeared on the Watts Up With That? website at https://wattsupwiththat.com/2021/06/22/global-warming-is-happening-what-does-it-mean/
