In a new paper – Diversity in Epistemic Communities – we use evolutionary game-theoretic models to argue that even in the absence of stereotype threat, explicit bias, or implicit bias, underrepresented groups in academia can end up disadvantaged by the dynamics of social interaction. This can happen whenever actors 1) condition their behavior on types and 2) learn to act in their own best interest. As we argue in the paper, the real devil here is type-conditioning. As long as it occurs in a population, these sorts of disadvantageous outcomes can emerge.
Perhaps the most striking thing about our results is that this disadvantage can arise in a group of well-meaning agents, who simply learn over time to benefit themselves. It is not clear that these agents need even be aware of the fact that they are learning to behave in ways consistent with discrimination and bias. In the rest of this post, we’ll explain how these social dynamical effects can occur.
Imagine an academic community with two types of researchers – men and women, say, or white researchers and researchers of color. Further suppose that one of these types is in the minority. (This is not a stretch. According to Norlock (2006), women account for only about 20% of tenured philosophers. Botts et al. (2014) calculate the proportion of black philosophers in the US to be 1.3%.)
Now imagine that these academics are engaged in various strategic interactions in their day-to-day lives. By strategic, we mean interactions where actors care about what their partners do. This also is not a stretch. It may not seem immediately obvious, but academics are constantly engaged in strategic behavior. Three sorts of strategic situations that we think are deeply important to academics are bargaining, cooperation, and collaboration.
To clarify this point, we’ll describe scenarios where academics do all three of these things. Bargaining sometimes occurs explicitly in academia, as when academics bargain for salaries and benefits, or for grant funding. Far more often, it occurs implicitly to divide joint work. Every time academics plan a conference together, advise students, organize a department, run a journal, etc. it must be decided who does what. If you assume (reasonably) that everyone prefers more research time (or more free time), there is a sense in which such agreements often have winners and losers.
Cooperation, likewise, is a central part of academia, often in the same sorts of situations just described – conference planning, editing volumes, etc. Group work of this sort can be beneficial compared to individual work (imagine running a large conference alone). At the same time, it can be risky: a bad cooperative partner might fail to do her agreed-upon job.
Lastly, many academics engage in research collaborations. These engagements often have a somewhat complicated strategic structure. Usually academics have to first pick collaborative partners (or choose not to have collaborative partners at all). Then once partners have been chosen and a collaborative research project has begun, academics have to bargain, implicitly or explicitly, to decide two sorts of things. First, who will do what work for the group project. And second, who will be best positioned, in terms of author order, to receive rewards for the project.
In our paper, we analyze these scenarios by modeling them as games. A game, in the game theoretic sense, is a simple representation of a strategic interaction that specifies 1) who does the interacting, 2) what sorts of things they can do, and 3) what the outcomes are to the actors for various sets of choices. The games we use to represent bargaining, cooperation, and collaboration in academic settings are the Nash demand game (or ‘divide-the-dollar’), the stag hunt, and the collaboration game respectively. We won’t describe these games in detail here, though in the paper we introduce them from first principles.
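Though we won’t define the games formally here either, a minimal sketch can convey their flavor. In the Python below, the payoff values and the ten-unit ‘pie’ are illustrative assumptions, not the parameters used in the paper: in the Nash demand game, compatible demands are honored while incompatible demands leave both players with nothing; in the stag hunt, joint cooperation pays best, cooperating alone fails, and the safe option pays moderately no matter what the partner does.

```python
def nash_demand(d1, d2, pie=10):
    """'Divide-the-dollar': each player demands a share of the pie.
    Compatible demands are honored; incompatible demands pay nothing."""
    return (d1, d2) if d1 + d2 <= pie else (0, 0)

def stag_hunt(a1, a2):
    """Cooperation with risk: hunting stag together pays best, hunting
    stag alone fails, and hunting hare is safe regardless of the partner."""
    payoff = {('stag', 'stag'): (4, 4),
              ('stag', 'hare'): (0, 3),
              ('hare', 'stag'): (3, 0),
              ('hare', 'hare'): (3, 3)}
    return payoff[(a1, a2)]

print(nash_demand(5, 5))          # fair split: (5, 5)
print(nash_demand(7, 7))          # incompatible demands: (0, 0)
print(stag_hunt('stag', 'hare'))  # risky cooperation exploited: (0, 3)
```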
How do individuals behave in these strategic scenarios? Imagine the researchers in our academic community learn, over time, to repeat choices that benefit them and refrain from making choices that do not. And suppose that the more beneficial a choice, the more likely it is an academic learns to make it. Empirical work indicates that this is a reasonable (if highly idealized) description of how people learn to change their behaviors over time.
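One simple learning rule with this character is Roth–Erev-style reinforcement, in which the probability of a choice grows with the payoffs it has earned. The sketch below is only illustrative – the demand values and the fixed, stubborn partner are our assumptions, not part of the paper’s models:

```python
import random

def choose(weights):
    """Choose an action with probability proportional to its accumulated weight."""
    actions = list(weights)
    return random.choices(actions, weights=[weights[a] for a in actions])[0]

def reinforce(weights, action, payoff):
    """Roth-Erev-style update: the payoff an action earned is added to its
    weight, so beneficial choices are repeated more often over time."""
    weights[action] += payoff

# A learner repeatedly faces a partner who always demands 6 of a 10-unit pie;
# demanding 4 succeeds (payoff 4), while demanding 5 or 6 fails (payoff 0).
random.seed(0)
weights = {4: 1.0, 5: 1.0, 6: 1.0}
for _ in range(500):
    demand = choose(weights)
    reinforce(weights, demand, demand if demand + 6 <= 10 else 0)
print(max(weights, key=weights.get))  # the learner settles on the safe demand: 4
```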
There is one last crucial element to our models, and this is that our researchers can condition their behavior on the type of partner they interact with. In other words, each person has the choice to treat men and women, or black and white, academics differently. This, again, mimics behavior seen in real academic populations. To give just a few examples, when presented with identical CVs, researchers are more likely to hire male candidates, and more likely to offer them more money (Moss-Racusin et al. (2012)). Black applicants are less likely to receive NIH funding (Ginther et al. (2011)). (And the list goes on.)
What happens when all this comes together? The first thing to note is that in such models, actors often learn what can be thought of as discriminatory behaviors against researchers of the other type. For example, it is common for actors to make fair bargaining demands of those like themselves, but extravagant demands of others. Or, actors will learn to cooperate with those like themselves, but not with the other type. In models of collaboration, agents will come to collaborate only with those like themselves, or else to collaborate with other types, but demand that those partners do more of the work relative to their author position.
This sort of possibility had been documented previously. (See Young (1993).) More surprising is that in many of the models we consider, the smaller the underrepresented population, the more likely it is that they end up being discriminated against. Justin first noted this sort of effect in bargaining and cooperative scenarios in an unpublished paper (Minority Disadvantage in Population Games). It occurs due to a discrepancy in the learning environment for minority and majority groups. Minority types run into majority types all the time, because majority types are, of course, in the majority. Majority types meet minority types only rarely.
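Under random mixing, this discrepancy is just arithmetic: a group making up a given share of the population meets the other type at a rate equal to one minus that share. A two-line sketch (the 10% figure is illustrative):

```python
def cross_type_rates(minority_share):
    """Under random pairing, the chance that a given interaction is with
    the *other* type: high for the minority, low for the majority."""
    return 1 - minority_share, minority_share

# With a 10% minority, minority members face the majority in 90% of their
# interactions; majority members face the minority in only 10% of theirs.
minority_rate, majority_rate = cross_type_rates(0.10)
print(minority_rate, majority_rate)  # 0.9 0.1
```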
As a result, minority types quickly learn to accommodate whatever behavior majorities are engaged in. In strategic scenarios, this often involves doing something relatively safe – taking an action that will guarantee some decent outcome, but avoiding a risky choice that might lead to a better payoff. Once minorities have learned to be accommodating in this way, majority types then slowly learn to take advantage of them.
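This dynamic can be sketched in a toy simulation: agents keep separate reinforcement weights for each partner type (the type-conditioning), and demands that succeed in a divide-the-dollar interaction gain weight. Everything here – the population size, the three demand levels, the learning rule – is an illustrative assumption rather than the paper’s actual model, and any single run is stochastic, so the effect is a tendency rather than a guarantee:

```python
import random

DEMANDS = [4, 5, 6]   # accommodating, fair, and aggressive demands
PIE = 10

def play(d1, d2):
    """Nash demand game: compatible demands are honored; otherwise both get 0."""
    return (d1, d2) if d1 + d2 <= PIE else (0, 0)

def choose(weights):
    """Pick a demand with probability proportional to its accumulated weight."""
    return random.choices(DEMANDS, weights=weights)[0]

def simulate(n_agents=100, minority_share=0.1, rounds=100_000, seed=0):
    """Repeated random pairing with type-conditioned reinforcement learning."""
    random.seed(seed)
    n_min = int(n_agents * minority_share)
    types = ['min'] * n_min + ['maj'] * (n_agents - n_min)
    # Each agent keeps separate demand weights for each partner type.
    weights = [{'min': [1.0] * 3, 'maj': [1.0] * 3} for _ in range(n_agents)]
    for _ in range(rounds):
        i, j = random.sample(range(n_agents), 2)
        di = choose(weights[i][types[j]])
        dj = choose(weights[j][types[i]])
        pi, pj = play(di, dj)
        # Roth-Erev-style update: demands that paid off gain weight.
        weights[i][types[j]][DEMANDS.index(di)] += pi
        weights[j][types[i]][DEMANDS.index(dj)] += pj
    return types, weights

def avg_demand(types, weights, actor, partner):
    """Average most-reinforced demand of actor-type agents against partner type."""
    picks = [DEMANDS[max(range(3), key=w[partner].__getitem__)]
             for w, t in zip(weights, types) if t == actor]
    return sum(picks) / len(picks)

types, weights = simulate()
print('majority vs minority:', avg_demand(types, weights, 'maj', 'min'))
print('minority vs majority:', avg_demand(types, weights, 'min', 'maj'))
```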
The figure below shows an example of the sort of results we are describing. It shows outcomes for 10,000 simulations of populations with two conditioning types playing the collaboration game. Each bar shows the proportion of outcomes where actors learn to bargain fairly with, or else to discriminate against, the other type. The smaller the minority population, the more likely it is that the majority learns to make unfair bargaining demands against them.
As we argue in the paper, this may help explain why some disciplines have so much trouble retaining members of underrepresented groups. It might also help explain why underrepresented groups tend to cluster into academic sub-disciplines. In our models, we find that when minority types end up with poor bargaining outcomes in collaboration, they learn to stop collaborating with those unlike themselves, but may continue to collaborate with other minority types.
The take-away? Even when apparent barriers to underrepresented groups in academia, like bias and stereotype threat, have been mitigated, strategic learning can still hurt minorities, and it is more likely to do so the smaller the minority group. These outcomes can occur whenever actors condition on types. Given what we know about human nature, it doesn’t seem likely that academics will be able to start treating people of all genders and races equally anytime soon. But this is one more reason to try!