An image of science (recall these posts on Hume, Williamson, Spinoza) contains a list of characteristics that function as shorthand for representing science in debates where one side (or more) appeals to the (epistemic) authority of science to settle the matter. Images of science can circulate within the sciences, in public discussion, and even in philosophy. Here are two dogmas in the contemporary image of science:
(1) While science may be a complex network of collaboration and competition, it also involves a self-correcting and free exchange of ideas.
(2) Consensus is the natural effect of a properly functioning, mature science.
Within philosophy, these two dogmas are familiar from Thomas Kuhn's work. Within economics and formal decision theory the dogmas are linked (at least since Aumann (1976), where (1) helps generate (2)).
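For readers who want the formal link, here is the core of Aumann's result (a standard statement of the theorem; reading it as dogma (1) helping generate dogma (2) is my gloss): honest, fully shared information washes out disagreement.

```latex
% Aumann (1976), "Agreeing to Disagree": if agents 1 and 2 share a
% common prior P, and the posteriors q_i = P(E | I_i) they assign to
% an event E (given their information partitions I_i) are common
% knowledge between them, then those posteriors coincide.
\[
  \text{common prior} \;+\; \text{common knowledge of } q_1, q_2
  \;\Longrightarrow\; q_1 = q_2 .
\]
```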
In some contexts such consensus within science is a species of higher-order evidence and generally a cue to defer to the science. And this reinforces the sense that science can settle debate, and thereby opens space for public action (assuming that prior to action there is public debate). Lurking in the previous two sentences is the idea that science can provide (some) authority to public policy based on it. So it is no surprise that governments promote institutions that help discover or articulate such consensus. In practice, governments and science, while distinct from each other (and sometimes at odds), frequently lend each other (some) authority by presupposing and reinforcing the two dogmas.
So far so good.
But because settled science is so important to authoritative action [and consensus such an important proxy for it], this also creates incentives to generate or organize the appearance of consensus. And there is conceptual space for such organized consensus. For even if one grants that consensus is the natural effect of a properly functioning, mature science, this is compatible with consensus also being the effect of malfunctioning science. Let's call the agreement that is the product of malfunctioning science 'ersatz consensus.' (And I'll use 'real consensus' when I distinguish a naturally occurring consensus from the ersatz type.)
It's worth noting that, despite the way I have phrased it in the previous paragraph, participants in malfunctioning science need not have the intent to produce an ersatz consensus. By this I mean two things: first, some strategic agents may well intentionally promote deliberate confusion such that there does not seem to be any consensus at all (think of the tobacco industry on the relationship between smoking and cancer; or the energy industry and man-made global warming, etc.). Second, sometimes an ersatz consensus is the product of the absence of a proper exchange of ideas in science (because of bad replication or publication practices, or because of temporary embargoes on the circulation of ideas [not uncommon in medical and defense sciences]). So, sometimes ersatz consensus is the effect of top-down organization, and sometimes it's the effect of bottom-up failures of process.
It's not surprising that it's incredibly difficult for outsiders to distinguish between an ersatz consensus and a real consensus, especially if the science is incredibly esoteric and/or interdisciplinary. But sometimes it is very difficult even for insiders to know whether the consensus view in their own science is real or ersatz. There need not be anything nefarious about this; all that's needed is that, because of (say) an extensive cognitive division of labor, even insiders may be unaware that the process that shapes the exchange of ideas within their own science is malfunctioning (or unaware of the nature and extent of the malfunction).
Even though I have well-known skeptical leanings, I don't want to exaggerate the claim in the previous two sentences. There are forensic techniques that provide signals about such matters. In addition, folks in adjoining sciences may have the skill set to notice the communicative problems I am hinting at. (In a way, that's the story of how the replication crisis got tackled.) But it may take time to sort such things out.
Notice that so far I have not touched the two dogmas of the contemporary image of science (although I have hinted that (1) may be an idealization). What I have done is extrapolate some of their mutual entailments.
As an aside, even without governments nudging us to accept that a scientific consensus is authoritative, (2) has very deep roots in the intellectual culture. The Stoic sages and the Spinozistic wise are said to agree, and I am sure you can list any number of other examples. But -- pace Kuhn -- it is worth noting that there have been scientific periods in which persistent disagreement within science over many important issues was not taken to be embarrassing (I think this was a common view in late nineteenth-century physics; go look at Poincaré, Duhem, Hertz, Mach, etc.).
It is worth noting, however, that (2) is not self-evident in all contexts. Sometimes different scientific research groups within the same scientific community rely on different auxiliary assumptions or computational shortcuts to generate (say) agreed-upon predictions (go check out real-life examples of the Quine-Duhem thesis). So even though at one level of description there may be consensus, at another level of description (one that includes the auxiliaries) there is not. I suspect this situation is endemic in a lot of sciences. (Again, over time this may well be sorted out if the auxiliaries start to matter for some reason.)
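Schematically (this compressed gloss is mine, not a formal result from the Quine-Duhem literature): two groups can converge on the same prediction from the shared hypothesis while relying on different auxiliaries.

```latex
% Two research groups derive the same prediction O from the shared
% hypothesis H, but via different auxiliary assumptions A_1 and A_2:
\[
  (H \wedge A_1) \vdash O
  \qquad \text{and} \qquad
  (H \wedge A_2) \vdash O,
  \qquad \text{with } A_1 \neq A_2 .
\]
% Consensus at the level of H and O; no consensus at the level of
% the auxiliaries A_i.
```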
In addition, in a very neat recent paper, Kevin Dorst has argued (on the Bayesian terrain of Aumann) -- in the context of trying to provide a mechanism for group polarization -- that if evidence is ambiguous we should not expect consensus. (He has a neat scientific example, so go read it.) My view is that in sciences with incredibly robust and mathematically refined background theories (and instruments) such ambiguity of evidence may well be rare, but that in many sciences ambiguity of evidence and, thus, lack of consensus is to be expected and rational.
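To get a feel for how ambiguity can block consensus, here is a minimal toy of my own (emphatically not Dorst's actual model, which works through higher-order uncertainty about how to read one's evidence): two agents share a prior and see the very same stream of signals, but a fraction of the signals is ambiguous, and each agent resolves ambiguity by a different, predictable rule before updating by Bayes on the signal-as-read.

```python
import random

random.seed(0)

P_H = 0.7      # P(heads | H): the coin is biased towards heads
P_NOT_H = 0.5  # P(heads | not-H): the coin is fair

def bayes_update(prior, saw_heads):
    """Bayes update of the credence in H on a heads/tails reading."""
    like_h = P_H if saw_heads else 1 - P_H
    like_n = P_NOT_H if saw_heads else 1 - P_NOT_H
    return like_h * prior / (like_h * prior + like_n * (1 - prior))

cred_a = cred_b = 0.5                  # common prior over H
for _ in range(200):
    heads = random.random() < P_H      # the world: H is in fact true
    ambiguous = random.random() < 0.4  # 40% of signals are hard to read
    if ambiguous:
        # Each agent resolves ambiguity by a different, predictable rule:
        cred_a = bayes_update(cred_a, True)    # A reads it as heads
        cred_b = bayes_update(cred_b, False)   # B reads it as tails
    else:
        cred_a = bayes_update(cred_a, heads)
        cred_b = bayes_update(cred_b, heads)

print(f"Agent A's credence in H: {cred_a:.3f}")
print(f"Agent B's credence in H: {cred_b:.3f}")
```

On typical runs the two credences end up at opposite extremes: identical inputs, honest updating, no consensus.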
Let me stop here for now.
My first reaction is that this skips the problem of reductionism, particularly in the kind of government science-for-policy settings to which you appeal. A student of mine studied GE's PCBs in the Hudson and the fight over cleanup. Different (honest, proper) sciences were introduced, asking different questions, because they were seen to imply different policy programs tied to different interest groups' preferred responses. Agencies were set up to listen to only one variety of research or research program. My sense is that "scientific" consensus, real or ersatz, is often a result of this kind of situatedness at least as much as of "science" (however that is being conceived) itself. Here, from Latour to Haraway, Star to Hacking, networks, situatedness, conventions and boundary practices impinge, I think. These issues are all over economic entomology, ecological research, soil science, crop breeding, and engineering. A difference might be claimed between basic and applied sciences, but my sense is that the line between them has been pretty well erased.
Posted by: Alan P Rudy | 09/05/2022 at 07:03 PM
Given the spirit of contrite fallibilism, if there is uncertainty about a particular question, then sometimes the right approach will be to have competing theories and research programs (not that kind!) where any scientist will have a definite favourite but be aware they may well be wrong. And on other occasions, there will be only one way forward (an experiment or a theory), but still there will be an awareness that this may fail.
So, in human genetic epidemiology, prior to the rise of large genome-wide association studies of common diseases, we carried out many expensive genetic linkage studies even though it was possible (and turned out to be the case) that most of the actual individual gene effects we needed to study were too small to be detected by those experiments. This is an empirical tradeoff between the cost and the statistical power of an experiment, one that may be keenly felt in the applied sciences Alan Rudy alludes to above. Another example: the scientific papers published by epidemiologists usually cost 4-5 fold more per minimum-publishable-unit than those produced in wet labs.
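The power side of that tradeoff is easy to make concrete. A back-of-envelope sketch (illustrative numbers of my own, not from any actual linkage study): approximate power of a two-sided z-test to detect a standardized effect size d at sample size n.

```python
from math import sqrt
from scipy.stats import norm

# Illustrative power calculation (hypothetical numbers): approximate
# power of a two-sided z-test to detect a standardized effect size d
# with n samples at significance level alpha.
def power(d, n, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    shift = d * sqrt(n)
    # P(reject H0) when the true standardized mean is d:
    return norm.sf(z_crit - shift) + norm.cdf(-z_crit - shift)

for d in (0.5, 0.1, 0.02):           # moderate, small, tiny effects
    for n in (100, 1_000, 100_000):
        print(f"d={d:<5} n={n:<7} power={power(d, n):.2f}")
```

Tiny effects only become detectable at sample sizes orders of magnitude larger, which is roughly the logic behind the field's move from linkage studies to large GWAS.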
Posted by: David Duffy | 09/06/2022 at 06:57 AM
"usually cost 4-5 fold more" - which reminded me of the joke that ends "they don't even need wastebaskets!"
Posted by: David Duffy | 09/06/2022 at 07:03 AM
Don't know much about fallibility or reductionism. But if one characterizes science after an image, then this only accounts for the second part of Nagel's cryptic account of reality: '...how things might possibly be.' Image is the root word of imagination, which leaves more than enough room for distortion of facts. The backstory here is crucial. When characterizations are made, this suggests interests, preferences, and motivations. Such propositional attitudes do little for accuracy or clarity and much for fallibility.
Posted by: Paul D. Van Pelt | 09/06/2022 at 02:20 PM