Some of the most exciting philosophy in the 21st century has been done with an eye towards philosophically significant developments in science. Social psychology has been a reliable source of insights: consider only how much ink has been spilled on situationism and virtue ethics or on Greene’s dual-process model of moral judgment and deontology.
A few years ago (2014), Merel Lefevere and I coined “The-Everybody-Did-It” (TEDI) Syndrome to discuss symptoms of collective negligence in science and other epistemic communities. We were, of course, not the first to identify group-think, nor the first to note that an appeal to TEDI was used by participating individuals as a kind of blanket get-out-of-jail card (cf. Machery's claim that it would be "ill-advised to blame philosophers"). I mention the date because our paper appeared before the first results of the Reproducibility Project were published. In it, we draw, en passant, on Feigenbaum and Levy (1993) to alert the reader that different scientific fields have widely diverging practices when it comes to replicating results or sharing data, and that there are well-known incentives and other formal and informal barriers against publishing replications or disconfirmations.* Our main epistemic point was that one cannot assume that scientific fields are perfectly functioning communicative communities (with a perfect market in ideas).
This point was not a sudden revelation for me. A decade earlier, in my very first article in the philosophy of economics (2005), I drew on work by Deirdre McCloskey, who had long been warning against the cult of statistical significance, to point out that, in the absence of robust background theory, the use of statistical significance often created the illusion of scientific rigor. I also argued that much of social science, including the elite reaches of economics, had a confirmatory bias rather than (with a nod to Popper) what I called a 'stress-testing' attitude.**
In his piece, Machery simultaneously presents himself (and other philosophers) as victims ("misled by scientists’ marketing"), as naïve ("too credulous"), and as prone to wishful thinking ("motivational bias"), amongst other ills. While I applaud his willingness to take stock and derive lessons from the past, I would argue, by contrast, that Machery and his peers could have done better had they viewed themselves not as "consumers" of science, but as responsible [since we're in the realm of economic metaphors] co-producers of knowledge. My interest is not to keep score, but to ensure that we learn the right lessons (some of which Machery also advocates). What do I have in mind?
The consumer model posits a deferential asymmetry between the expert specialist scientist and the philosopher. That's not altogether surprising, because the expert is (ahh) an expert whereas the philosopher is not. And so lurking in Machery's narrative is a strict cognitive division of labor in which all philosophers need to do -- when they want to use some bit of social science for their own ends -- is sample (say) abstracts and conclusions and then choose the right one to satisfy their needs. The consumer on this (tacit) model is not, or barely, responsible for the quality control of the science she selects.
And that's because the consumer model generally carries a commitment to the proposition that scientific communities have what we might call an efficient market in ideas. This proposition tends to entail that quality control within science is thorough and rapidly arbitrages away any serious epistemic problems. Machery recognizes that this can't be quite right ("the frontier of science is replete with unreplicable results"), but his main solution (in the post), to be suspicious of simple solutions for social ills, while undoubtedly salutary, leaves the consumer model in place.
Now, the previous paragraphs are a bit of a caricature (not least of Machery, who has the skill-set to engage with quite technical details of science), but treat the consumer model as a provisional ideal type. Here's a norm of science associated with a different ideal type (one drawn from Michael Polanyi's ideas about the republic of science and the role of adjoining disciplines in maintaining quality control along the whole chain of disciplines): when you use science in a non-trivial fashion for your own epistemic ends, you re-do the calculations, learn to use the relevant models, and, where possible, check the evidence and arguments, etc.+ Obviously, that falls short of replicating experiments and being authoritative in the field. (But someone like Hasok Chang may well suggest: go replicate!) If philosophers had done their own forensic job in the uptake of (say) social psychology, the fragility of much of the work would have been transparent to them (or at least noticeable) and could have played a fruitful role in their discussions (and arguably in social psychology). Yes, that's very easy to say after the fact, but, as I noted above, I was saying some such stuff before the fact.***
Obviously, the kind of work I mention in the previous paragraph is time-consuming and also transforms the nature of the expertise of those who use social science for their own ends. On this model, philosophers who want to deploy social science move from being consumers, who rely on external markers of quality control, to being co-responsible for the 'scientific' supply chain. Of course, there are some epistemic short-cuts (say, teaming up with others, or getting an advanced degree in an adjoining field), each with its own risks and rewards, but there are no magic bullets.
Obviously, there are fields (e.g., climate science, bits of neuroscience, epidemiology, etc.) where the underlying science itself draws on so many kinds of expertise and so many different kinds of evidence that a lone philosopher is in a very bad position to do any forensic work. In my view, in such areas being a responsible co-producer of knowledge requires joining a research team or an interdisciplinary center with active research meetings, or cultivating a wide array of diverging scientific 'informants,' so that you familiarize yourself with the working discussions of such teams as they evaluate the evidence their composite parts create and assess the claims of other teams (etc.).
One way to mitigate the possibility of TEDI, Lefevere and I argue, is to make sure that dissenting voices in epistemic communities are heard and credited. (We think that facilitating this, even seeking out and amplifying critics, is a special role of aggregators [Polanyi calls these (recall) 'influentials']--prominent figures in the field who also interface with other fields and policy areas.) So, I am pleased to learn that Machery agrees (although a bit sad he did not credit us, or Polanyi!). As Machery notes, another way to mitigate the possibility of TEDI is to directly seek out critics of one's views.
In a zero-sum (funding and jobs) contest it's not so easy to ensure that dissenters are heard. I mentioned the Feigenbaum and Levy paper because I also learned (from Levy) that, back in the day, they had trouble obtaining grants for follow-up research. Unsurprisingly, they switched research focus.
Often nay-sayers and critics are silenced by the demand that they offer an alternative theory or an alternative approach to solving some problem. (If one is shaped by Kuhnian ideas or cost-benefit analysis, ignoring critics may well be thought rational.) This norm asymmetrically rewards the most confident voices. But the burden of proof ought to be on the producers of knowledge, not on the critics.
But if we philosophers are also knowledge producers, as I claim we are when we use science for our own ends, then we, too, must be willing to listen to and seek out the critics of particular sciences within philosophy. In fact, the philosophy of the special sciences houses lots of folk with critical attitudes toward the reigning orthodoxy in the field they study without being in any sense anti-scientific (here are some examples; feel free to add your own): think of Stegenga's work on medical evidence, Femke Truijens (a former student) on the clinical validity of clinical research in therapy, Jay Odenbaugh and Anna Alexandrova on robustness analysis, Joel Katzav (and his colleagues) on the role of probabilistic assessment in climate science, and a whole range of philosophers of physics who helped develop challenges to 'Copenhagen.'
Even so, I suspect lots of professional philosophers worry that criticism of a particular ruling orthodoxy in science slides into a kind of anti-science (familiar from now relatively dormant strains in continental/STS circles or in America's religious right), or might be co-opted by politically noxious forces. As Eric Winsberg, Neil Levy, and I warned, we saw this play out in the recent COVID crisis.
In fact, riding on the coat-tails of hyped new science to tenure has been a rewarding rite of passage within analytic philosophy for a long time now. (No, I am not going to name names.) As Machery notes, it is kind of constitutive of "some of the most exciting philosophy" to do so.
We need more tools for thinking about the forensic role of philosophers as responsible co-producers of knowledge (when we are acting as such), and about the inductive risks involved. In my experience, philosophers tend to resist internalizing the inductive risk of our own work because we tend to model/understand ourselves as pure truth-seekers. The moment you critically introduce the incentive structure of the scientists, or of the philosophers deploying that science, you are quickly accused of 'doing sociology' and somehow ignoring evidence or epistemic issues (it's an interesting fact that within philosophy and economics 'sociology' can still be used as a pejorative). More about that some other time (or just google 'methodological analytic egalitarianism'). But if we leave the consumer model in place, along with the incentives that shape how we evaluate "some of the most exciting philosophy," we will make future generations of philosophers as credulous as ours. Perhaps that's the human condition, but it would be nice if we could try to do better.
*Feigenbaum, S., and D.M. Levy. 1993. "The market for (ir)reproducible econometrics." Accountability in Research 3(1): 25–43. They were not alone in trying to alert the world to such problems: here's a Twitter thread I did with a list of early papers on the replication crisis and problems with statistical significance.
**Both of these papers have had negligible impact. I also had considerable trouble publishing follow-up work, and basically decided to blog about my ideas rather than persist with the gate-keepers, and moved on to other interests.
***I wasn't saying it about implicit bias research. So, I don't blame those folk for not paying attention to me. :)
+ I learned to appreciate the point from George Smith when we studied the evidential arguments of the Huygens-Newton debate on universal gravity; George redid all their calculations. And in that way we discovered that the standard narrative about the debate was highly misleading.