What is the role, if any, of philosophy of science during this pandemic and global lockdown?...So far in the lockdown—May whatever-it-is-today—I am not aware of a systematic piece written by a philosopher of science on the pandemic or the policy response...What’s keeping me up at night, though, is this: should philosophers of science be trying to assess the merits of the various scientific arguments pertaining to SARS-CoV-2 that are now having such profound implications on policy, in a rigorous yet publicly visible manner, at a pace which accords with that of the relevant scientific work? This has been a moment of what we might call ‘fast science’. Ought philosophy of science attempt to engage with, contribute to, and criticize fast science, as it unfolds?
Both the harms (broadly construed) of SARS-CoV-2 and the effectiveness and unintended consequences of the social policies meant to mitigate those harms have been underdetermined by available evidence. At issue are the plurality of epidemiological models generating discordant predictions, the reliability of the various models, the empirical substantiation of model parameters, the evidential basis of the effectiveness of many lockdown strategies, and a complicated intertwining of social values and science. Moreover, stances on the pandemic are, obviously, politicized. The pandemic and global policy response seems, therefore, apt for analysis from the perspective of philosophy of science.
...The relevant scientific issues require technical expertise in epidemiology and economics, and up-to-date knowledge of a rapidly shifting epistemic landscape. A meta-level operative principle for any commentator on the pandemic ought to be epistemic humility, especially given the secondary pandemic of over-confidence among policymakers, armchair epidemiologists, and the scientists themselves. Moreover, because positions on the pandemic are often politicized, a philosopher of science who defends the Swedish approach (for example) might be assumed to have values and motivations aligned with, say, Trump supporters.
That said, our colleagues in philosophy of science include experts on scientific models, on methodological problems in medical science, on causal inference, on the relationship between science and values, on inductive risk, and on many other issues pertinent to the pandemic and policy response. Obviously, the stakes are high: some models are telling us that millions of people might die from the virus, while other models are telling us that millions of children might starve as a result of the global lockdowns. Should we be using our expertise to assess these scientific arguments as they unfold? There is a temporal dimension to my question....I am asking if we should be assessing the particular details and predictions of these models, now, and how they are being deployed in policy, today, in a manner that, to the extent possible, reflects the manner of presentation of the pertinent scientific arguments (namely, the speed of their articulation, their visibility, and their impact). Should philosophy of science engage with fast science?
...
So much science having so much impact, yet philosophers of science have been relatively quiet...
In an email, Nancy Cartwright agreed that philosophers can and should weigh in on serious scientific and policy issues, though she noted that her own skills are at a meta-level rather than assessing particular data and models—I reckon that this is a diagnosis for many of us. Jay Odenbaugh raised a number of such meta-level questions about models that are especially salient in the present context and that are worth quoting in full:
When should policymakers ignore models and when should they use them for policy guidance? What are good rules of thumb for critically thinking about models for laypeople? Do we need models for policy or can we simply get by with observational studies? How should we determine who is a good or bad modeller? How do epidemiological models compare to other types of models in other domains?
Although persuasive answers to these questions might require the sort of slow, meta-level scholarship that our discipline is accustomed to, such answers might better prepare our discipline for more rapid and impactful engagement in future episodes of fast science, which could emerge from all sorts of phenomena, such as a climate crisis, advances in artificial intelligence, or indeed, another virus.--Jacob Stegenga, "Fast Science and the Philosophy of Science," at Auxiliary Hypothesis (BJPS) & Daily Nous
Today's post is the third in a series (see the first here, inspired by Philippe Lemoine; and the second here).
In his essay quoted above, which summarizes and expands on a lively Facebook discussion among philosophers of science, Stegenga distinguishes (implicitly) between fast science and slow science. He implies that slow science is the norm and fast science is the exception due to the (exceptional) "current crisis." He does not define either, but let's assume we understand what he means and accept the distinction. It is worth noting that some of the reasons philosophers of science give for why they might not have anything to contribute to the public debates over unfolding fast science have nothing to do with the fact that it is 'fast', but rather with the fact that it is policy apt. So, within philosophy of science there is a further divide, if not in kind then at least in degree, between, let's call it, pure science and policy apt science (not to be confused with 'applied' science).
Much of philosophy of science, and the ruling norms within it, is still shaped by the dominance of physics, math, and logic in the philosophical imagination. Of course, biology and cognitive science are huge fields in philosophy of science, but when these are studied by philosophers of science, they are not, in the first instance, treated as policy apt. Yes, philosophers of science who work on medicine, economics, engineering, and (say) decision theory are much more aware that the sciences they study are themselves organized around epistemic norms pertaining to application and practices of justification that presuppose uptake. Even so, I cannot tell you how often, say, decision theorists will balk at the thought that the way the formal apparatus is regularly applied is a legitimate form of criticism of the epistemic status of the machinery. (That's 'misuse' and so irrelevant, etc.) I mention this to suggest that thinking about uptake does not come naturally to bread-and-butter philosophers of science, who often pay lip service to it in grant applications and on social media, but don't internalize it in their work and mutual evaluation.
Of course, the previous sentence is an exaggeration. Thanks to Heather Douglas there is a lively discussion about inductive risk in philosophy of science, and one can now assume knowledge of the concerns and arguments of feminist philosophers of science, which was not common when I was a PhD student a generation or so ago. Even so, it is no surprise that Eric Winsberg (who has done important work on climate science) and Alex Broadbent (who has done important work on epidemiology) have been so visible as philosophers of science. They have cut their teeth thinking, in part, about the complexities of science in public policy.* Okay, with that in place, I want to offer four mutually supporting but distinct claims.
First, the question philosophers of science should be asking is [A] how should we think about the intersection between fast and policy apt science in a crisis? It strikes me that what's clearly needed is real philosophical work on that question. One way to assimilate it to familiar philosophy of science is to think about that intersection in terms of the combination of scientific norms of controversy and inductive risk (or what in my own work I call 'responsible speech'). It has been well understood since Kuhn that the range of disagreement is constrained in scientific practice. In many ways fast, policy apt science during a crisis has features related to the turbulence characteristic of a possible paradigm shift and the intense disagreement familiar from a pre-paradigmatic stage. But in a crisis, these very familiar features play out with possible uptake in policy, mediated by what Polanyi calls 'influentials' and I call 'aggregators.'
Second, in a policy apt context one can never assume what one may call a free market in ideas, or perfect communicative rationality, which is nearly always tacitly assumed by philosophers of science when discussing scientific controversy. By this I am not pointing to the barriers of entry around who counts as a relevant expert or who has access to the expensive machines. Rather, I am pointing out that in policy apt contexts two features tend to be structurally present: (i) some of the experts are not free to judge or free to say what they think even though they have highly salient expertise. Government scientists may well be under gag orders of various kinds. This need not always be sinister--for coordination purposes, it may well be very important that government has a clear message. This is why, for example, Eric Winsberg and I noted that one should be alert to manufactured consensus. (Of course, philosophers of science should model under what conditions a unified flawed message is to be preferred over a noisier signal!) (ii) Governments always have an incentive to withhold inconvenient facts or to nudge scientists in a certain direction. So, what we really should be asking is [B] how should we think about the intersection between fast and policy apt science in a crisis with imperfect communicative rationality?
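To give a sense of what the parenthetical modeling exercise might look like, here is a deliberately minimal sketch (in Python), not anybody's actual model. It compares the average error a policymaker incurs from a single, precise-but-biased unified message with the error from averaging several noisy, unbiased expert signals. The numbers (bias, noise level, number of experts) are stipulations for illustration only.

```python
import random
import statistics

# Toy comparison: a policymaker must estimate some quantity (here normalized to 0).
# Option 1: a single unified message that is precise but systematically biased.
# Option 2: averaging k independent expert signals that are unbiased but noisy.
# All parameter values below are stipulated for illustration only.

TRUE_VALUE = 0.0
BIAS = 0.5          # systematic error of the unified message
NOISE_SD = 1.5      # standard deviation of each independent expert signal
N_EXPERTS = 5
N_TRIALS = 100_000

random.seed(1)

def unified_message():
    # precise (no noise) but biased
    return TRUE_VALUE + BIAS

def averaged_experts(k=N_EXPERTS, sd=NOISE_SD):
    # average of k independent, unbiased, noisy signals
    signals = [random.gauss(TRUE_VALUE, sd) for _ in range(k)]
    return statistics.fmean(signals)

def mse(estimator, trials=N_TRIALS):
    # mean squared error of the policymaker's estimate over many trials
    return statistics.fmean((estimator() - TRUE_VALUE) ** 2 for _ in range(trials))

print(f"MSE, unified biased message   : {mse(unified_message):.3f}")
print(f"MSE, {N_EXPERTS} noisy expert signals : {mse(averaged_experts):.3f}")
# Rough crossover: the noisy channel wins when NOISE_SD**2 / N_EXPERTS < BIAS**2,
# i.e. with enough independent experts (or little enough noise) the noisier
# signal beats the unified-but-flawed message.
```

With these made-up numbers the unified-but-flawed message actually does better; only with enough independent experts (here, roughly ten or more) does the noisier channel win. And, of course, this toy deliberately leaves out what makes the real question hard: the coordination value of a single clear message, strategic behavior by governments, and correlated errors among experts.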
Third, the modern academy is organized to incentivize and reward hyper-specialization (that is, to create barriers of entry of various kinds). In one sense philosophy of science is no exception: our work often maps onto particular sub-disciplines in the special sciences. But, simultaneously, many sciences draw on generic methods and practices that they share with other sciences. In virtue of the foundational questions we are often free to pursue, philosophers of science often build up non-domain-specific expertise about the strengths, weaknesses, and features of these generic methods, expertise that can surpass that of experts within a special science. Of course, such generic knowledge is not limited to philosophers (statisticians and computer scientists, for example, have it too). But it is in virtue of the presence of such domain-invariant expertise about the generic instruments of science that, I have claimed, synthetic philosophy is possible.
Of course, what is characteristic of the present crisis is that we see the intermingling not just of fast and policy apt sciences, but also of many policy questions that have not yet been defined to the point where they are what I call constrained choice, that is (to simplify), where the relevant issues have been internalized in one science and can be formulated as one kind of policy decision. Rather, the crisis is characterized by an evolving recognition that many different kinds of sciences, and many different kinds of policies, can have many complex social effects (on psychological well-being, on the incidence of non-COVID-19 diseases as hospitalization rates shift, on supply chains, on public transit and social distancing, etc., etc.). I have suggested that this requires something I have called 'policy apt synthetic philosophy.' So, the question we should ask is [C] how should policy apt synthetic philosophers of science think about the intersection among many fast and policy apt sciences in a crisis with imperfect communicative rationality?
Fourth, policy apt synthetic philosophy is now housed, in attenuated fashion, in departments of science policy/advice that help coordinate decisions involving multiple sciences and policy domains. There is a need for a class of individuals who study versions of [A], [B], and [C] alongside the principles of such policy coordination, and who are not government officials, so that they can contribute to public discussion, alongside the scientific aggregators, in an independent fashion. But what this also points to is the need for a more collaborative ethic within philosophy [of science], along the model of what I once dubbed (recall) integrated PPE (Philosophy, Politics, Economics).+ This last point suggests that some of the infrastructure for policy apt synthetic philosophy is already developing, but let me stop here, today.
*To put this autobiographically: around 2005-6, I read Carl Craver's Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience in draft. And I remember how eye-opening it was to think of the practice of even fundamental or pure science in terms of medical practice.
+If that is right, we should see disproportionate co-authorship in the bibliography of COVID-19 writings by philosophers.
I love this series of posts, Eric! One thought on your second of four theses: "policy apt context one can never assume what one may call a free market in ideas, or perfect communicative rationality, which is nearly always tacitly assumed by philosophers of science when discussing scientific controversy... (i) some of the experts are not free to judge or free to say what they think even though they have highly salient expertise. Government scientists may well have gag orders of various kinds... (ii) Governments always have an incentive to withhold inconvenient facts or to nudge scientists in a certain direction"--While I agree with the other three claims, this doesn't seem so much of an issue to me.
The way I most often see policy-makers appealing to arguments from academics is by picking and choosing--in most policy-apt contexts, there are a variety of academics making various claims and, when they are apprised of arguments that support pre-established policies, the policy-makers cite those arguments. I think this is the usual way policy-makers use work by academics. In this context, it is not an issue of policy-makers manipulating (in the social choice sense) the academics' work; the work is already there. Let us call this the 'pick-and-choose' model.
However, there is a considerably smaller number of cases in which the arguments change minds. I take it that the early piece by Neil Ferguson et al. changed minds in Whitehall and the White House. But, once again, this is not a case of policy-makers manipulating the work--they are responding to information they didn't have and are changing their policy preferences. Call this the 'shock-and-awe' model.
There might be some cases where the government is manipulating scientific discussion, but that doesn't fit either of these models, and, to be honest, I think these two models cover the vast majority of interactions between governments and policy-apt academic work.
Posted by: Kian Mintz-Woo | 05/20/2020 at 02:05 AM
Hi Kian,
Thank you for your comments. Let me first focus on a point of agreement: the demand side matters. And, second, this indeed entails that science often functions as legitimation for pre-existing policy in your pick-and-choose sense. (I use 'legitimation' because I think your focus on 'arguments' is not quite right.) These cases are especially prevalent when outcome patterns are already politicized (or where the incentives/consequences are well understood by political agents).
Third, even in such cases one can't assume a free market in ideas, because political agents will also help shape the structure of science (through grants and appointments). And it matters whether the scientists are bureaucrats, academics, in industry, etc.
Fourth, and finally, in contexts of genuine uncertainty (like our pandemic), policy agents are also unclear about consequences, and so pick-and-choose is not as attractive. Different kinds of aggregators will then help shape policy and public debate.
Posted by: Eric Schliesser | 05/20/2020 at 10:49 AM