How we fool ourselves. Part II: Scientific consensus building
by Judith Curry
“Like a magnetic field that pulls iron filings into alignment, a powerful cultural belief is aligning multiple sources of scientific bias in the same direction.” – policy scientist Daniel Sarewitz
Statistician Regina Nuzzo summarizes the problem:
“This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. In today’s environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept ‘reasonable’ outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.”
Psychologists Joseph Simmons et al. find that researcher bias can have a profound influence on the outcome of a study. Such ‘researcher degrees of freedom’ include choices about which variables to include, which data to include, which comparisons to make, and which analysis methods to use. Each of these choices may be reasonable, but when added together they allow researchers to extract statistical significance or other meaningful information out of almost any data set. Researchers making necessary choices about data collection and analysis believe that they are making the correct, or at least reasonable, choices. But their bias will influence those choices in ways that researchers may not be aware of. Further, researchers may simply be using the techniques that work – meaning they give the results the researcher wants.
The objective of scientific research is to find out what is really true, not just verify our biases. If a community of scientists has a diversity of perspectives and different biases, then the checks and balances in the scientific process including peer review will eventually counter the biases of individuals. Sometimes this is true—but often this does not happen quickly or smoothly. Not only can poor data and wrong ideas survive, but good ideas can be suppressed.
However, when biases caused by motivated reasoning and career pressures become entrenched in the institutions that support science – the professional societies, scientific journals, universities and funding agencies – then that subfield of science may be led astray for decades.
Biases caused by a consensus building process
Consensus is viewed as a proxy for truth in many discussions of science. A consensus formed by the independent and free deliberations of many is a strong indicator of truth. However, a consensus can only be trusted to the extent that individuals are free to disagree with it.
A scientific argument can evolve prematurely into a ruling theory if cultural forces are sufficiently strong and aligned in the same direction. Premature theories enforced by an explicit consensus building process harm scientific progress because of the questions that don’t get asked and the investigations that aren’t undertaken. Nuzzo (2015) refers to this as ‘hypothesis myopia.’
If the objective of scientific research is to obtain truth and avoid error, how might a consensus seeking process introduce bias into the science and increase the chances for error?
‘Confirmation bias’ is a well-known psychological principle that connotes the seeking or interpretation of evidence in ways that are partial to existing beliefs, expectations, or an existing hypothesis. Confirmation bias usually refers to unwitting selectivity in the acquisition and interpretation of evidence.
Philosopher Thomas Kelly (2005) provides the following insight into confirmation bias. As more and more peers weigh in on a given issue, the proportion of the total evidence which consists of higher order psychological evidence of what other people believe increases, and the proportion of the total evidence which consists of first order evidence decreases. Kelly concludes that over time, this invisible hand process tends to bestow a certain competitive advantage to our prior beliefs with respect to confirmation and disconfirmation.
Allen et al. (2020) demonstrate how dependence, pressure, and polarization can force a consensus, making reliance on consensus as an indicator of truth unreliable unless individuals are free to disagree with it without repression or reprisal. Similarly, when strong incentives favor affirmation of a position, a consensus affirming it becomes almost inevitable, and therefore all but meaningless.
Communication theorist Jean Goodwin argues that once the consensus claim was made, scientists involved in the ongoing IPCC process had reasons not just to consider the scientific evidence, but also to consider the possible effect of their statements on their ability to defend the consensus claim.
The IPCC’s consensus-building process arguably promotes groupthink. ‘Groupthink’ is a pattern of thought characterized by self-deception, forced manufacture of consent, and conformity to group values. Janis (1972) describes eight symptoms of groupthink:
- illusion of invulnerability
- collective rationalization
- belief in inherent morality
- stereotyped views of out-groups
- direct pressure on dissenters
- self-censorship
- illusion of unanimity
- self-appointed mind guards
Many defenders of the IPCC consensus − both scientists and consensus entrepreneurs − show many if not all of these symptoms.
Thomas Gold (1989) discussed the dangers that ‘herd behavior’ poses for scientists, potentially leading to an inertia-driven persistence of false consensus opinion within the sciences. While herd instinct has value in sociological behavior, it has been a disaster in science, where what we generally want is diversity: when people all pursue the same avenue, they tend to shut out other avenues, and they are not always on the right one.
It is not just the herd instinct in the individuals that is of concern. If peer support and moral and financial consequences are at stake, then staying with the herd is the successful policy for the individual; however, it is not the successful policy for the pursuit of science. Mental herd behavior, even if it does not actually put a clamp upon free thinking, insidiously applies pressure to follow the fashion. The institutions that support science − financial support, the journals, the judgment of referees, the invitations to conferences, professional recognition − are all influenced by herd behavior.
Economist William Butos (2015) characterizes the IPCC as a ‘Big Player’ in science in that it possesses all of the attributes characteristic of Big Players in markets: bigness in terms of influence, insensitivity to the usual constraints, and discretion in its ability to promote a favored direction of research. This characterization of the IPCC as ‘Big Player’ is similar to economist Richard Tol’s characterization of the IPCC as a knowledge monopoly. The IPCC’s influence in climate science is pervasive, allowing it to largely ignore the usual scientific constraints on the acceptance of hypotheses. Professional success in climate science has become more tied to the acceptance of the IPCC’s pronouncements than with the exploration of contrary possibilities.
The existence of the IPCC as a ‘big player’ and a ‘knowledge monopoly’ on climate change can lead to premature canonization of IPCC conclusions. Premature canonization refers to widespread scientific belief in a false or incomplete conclusion, which leads to suppression masquerading as rejection. Suppression occurs when the fear of social sanctions prevents ideas from being explored or empirical findings from being presented in scientific or public forums. In science, rejection occurs when an idea has been explored and the evidence has been found wanting. A classic, relatively recent case of premature canonization involves the scientific identification of the causes of peptic ulcers: the entrenched consensus attributed ulcers to stress and excess stomach acid, and the bacterial explanation (Helicobacter pylori) advanced by Marshall and Warren was dismissed for years before it was vindicated.
So what are the implications of these concerns for the IPCC’s consensus on human-caused climate change? Cognitive biases in the context of an institutionalized consensus building process have arguably resulted in the consensus becoming increasingly confirmed, and even canonized, in a self-reinforcing way. An extended group of scientists derive their confidence in the consensus in a second-hand manner from the institutional authority of the IPCC and the emphatic nature in which the consensus is portrayed. This ‘invisible hand’ marginalizes skeptical perspectives. Overconfident assertions by the ‘Big Player’ take away the motivation for scientists to challenge the consensus, particularly when they can expect to be called a ‘denier’ for their efforts and see their chances diminish for professional recognition and research funding.
The consensus building process acts to amplify personal biases, and marginalizes disagreement from either a majority opinion or the opinion of the loudest or most motivated person in the room. One can only speculate on the magnitude and importance of the biases introduced into climate science by the IPCC’s consensus seeking process.
This article appeared on the Climate Etc. website at https://judithcurry.com/2021/04/10/how-we-fool-ourselves-part-ii-scientific-consensus-building/#more-27378