[This is a guest post by Cailin O’Connor and Justin P. Bruner.--ES]
In a new paper – Diversity in Epistemic Communities – we use evolutionary game theoretic models to argue that, even in the absence of stereotype threat, explicit bias, or implicit bias, underrepresented groups in academia can end up disadvantaged by the dynamics of social interaction. This can happen whenever actors 1) condition their behavior on types and 2) learn to act in their own best interest. As we argue in the paper, the real devil here is type-conditioning. As long as it occurs in a population, you can get these sorts of outcomes, in which one group ends up disadvantaged.
Perhaps the most striking thing about our results is that this disadvantage can arise in a group of well-meaning agents, who simply learn over time to benefit themselves. It is not clear that these agents need even be aware of the fact that they are learning to behave in ways consistent with discrimination and bias. In the rest of this post, we’ll explain how these social dynamical effects can occur.
Imagine an academic community with two types of researchers – men and women, say, or white researchers and researchers of color. Further suppose that one of these types is in the minority. (This is not a stretch. According to Norlock (2006), women account for only about 20% of tenured philosophers. Botts et al. (2014) calculate the proportion of black philosophers in the US to be 1.3%.)
Now imagine that these academics are engaged in various strategic interactions in their day-to-day lives. By strategic, we mean interactions where actors care about what their partners do. This also is not a stretch. It may not seem immediately obvious, but academics are constantly engaged in strategic behavior. Three sorts of strategic situations that we think are deeply important to academics are bargaining, cooperation, and collaboration.
To clarify this point, we’ll describe scenarios where academics do all three of these things. Bargaining sometimes occurs explicitly in academia, as when academics bargain for salaries and benefits, or for grant funding. Far more often, it occurs implicitly, to divide joint work. Every time academics plan a conference together, advise students, organize a department, run a journal, etc., it must be decided who does what. If you assume (reasonably) that everyone prefers more research time (or more free time), there is a sense in which such agreements often have winners and losers.
Cooperation, likewise, is a central part of academia, often in the same sorts of situations just described – conference planning, editing volumes, etc. Group work of this sort can be beneficial compared to individual work (imagine running a large conference alone). At the same time, it can be risky, in that a bad cooperative partner might fail to do her agreed-upon job.
Lastly, many academics engage in research collaborations. These engagements often have a somewhat complicated strategic structure. Usually academics have to first pick collaborative partners (or choose not to have collaborative partners at all). Then once partners have been chosen and a collaborative research project has begun, academics have to bargain, implicitly or explicitly, to decide two sorts of things. First, who will do what work for the group project. And second, who will be best positioned, in terms of author order, to receive rewards for the project.
In our paper, we analyze these scenarios by modeling them as games. A game, in the game theoretic sense, is a simple representation of a strategic interaction that specifies 1) who does the interacting, 2) what sorts of things they can do, and 3) what the payoffs to the actors are for various combinations of choices. The games we use to represent bargaining, cooperation, and collaboration in academic settings are the Nash demand game (or ‘divide-the-dollar’), the stag hunt, and the collaboration game, respectively. We won’t describe these games in detail here, though in the paper we introduce them from first principles.
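(For readers who want something concrete anyway, here is a minimal sketch of the first two of these games, with purely illustrative payoff numbers that are not taken from the paper.)

```python
# Illustrative sketch of two of the games; payoff values are made up for this post.

def nash_demand_payoff(demand_1, demand_2, pie=10):
    """Mini Nash demand game ('divide-the-dollar').

    Each player demands a share of the pie (e.g. 4, 5, or 6 out of 10).
    Compatible demands (summing to at most the pie) are honored;
    incompatible demands leave both players with nothing.
    """
    if demand_1 + demand_2 <= pie:
        return demand_1, demand_2
    return 0, 0

# Stag hunt: (own action, partner action) -> (own payoff, partner payoff).
# Hunting stag together pays best, but hunting hare is the safe choice.
STAG_HUNT = {
    ('stag', 'stag'): (4, 4),   # successful cooperation
    ('stag', 'hare'): (0, 3),   # the cooperator is let down
    ('hare', 'stag'): (3, 0),
    ('hare', 'hare'): (3, 3),   # safe, but worse than mutual cooperation
}
```

The collaboration game combines these elements: partners first decide whether to work together at all, and then must divide the labor and the credit; its details are in the paper.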
How do individuals behave in these strategic scenarios? Imagine that the researchers in our academic community learn, over time, to repeat choices that benefit them and to refrain from making choices that do not. And suppose that the more beneficial a choice, the more likely it is that an academic learns to make it. Empirical work indicates that this is a reasonable (if highly idealized) description of how people learn to change their behavior over time.
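One standard way to formalize this kind of learning is simple reinforcement (Roth-Erev style) learning: each available choice carries a weight, an agent picks choices with probability proportional to their weights, and the payoff received is added to the weight of the choice just made. The sketch below is one minimal version of that idea, offered only as an illustration; the learning dynamics in the paper may differ in their details.

```python
import random

class Learner:
    """A minimal Roth-Erev style reinforcement learner over a fixed set of choices."""

    def __init__(self, choices, initial_weight=1.0):
        self.weights = {c: initial_weight for c in choices}

    def choose(self):
        # Choices are selected with probability proportional to their weights,
        # so choices that have paid off well in the past are made more often.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for choice, weight in self.weights.items():
            r -= weight
            if r <= 0:
                return choice
        return choice  # floating-point fallback

    def reinforce(self, choice, payoff):
        # The payoff earned is added to the weight of the choice just made.
        self.weights[choice] += payoff
```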
There is one last crucial element to our models, and this is that our researchers can condition their behavior on the type of partner they interact with. In other words, each person can choose to treat male and female academics, or black and white academics, differently. This, again, mimics behavior seen in real academic populations. To give just a few examples: when presented with identical CVs, researchers are more likely to hire male candidates and more likely to offer them higher salaries (Moss-Racusin et al. (2012)), and black applicants are less likely to receive NIH funding (Ginther et al. (2011)). (And the list goes on.)
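In model terms, conditioning on type just means that each agent keeps a separate learned strategy for each type of partner. Continuing the reinforcement-learning sketch above (again, an illustration rather than the paper's exact implementation):

```python
class TypeConditioner:
    """An agent who learns a separate strategy for each type of partner."""

    def __init__(self, partner_types, choices):
        # One independent reinforcement learner per partner type, so behavior
        # toward one type can drift apart from behavior toward the other.
        self.learners = {t: Learner(choices) for t in partner_types}

    def choose(self, partner_type):
        return self.learners[partner_type].choose()

    def reinforce(self, partner_type, choice, payoff):
        self.learners[partner_type].reinforce(choice, payoff)
```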
What happens when all this comes together? The first thing to note is that in such models, actors often learn what can be thought of as discriminatory behaviors toward researchers of the other type. For example, it is common for actors to make fair bargaining demands of those like themselves, but extravagant demands of others. Or actors will learn to cooperate with those like themselves, but not with the other type. In models of collaboration, agents will come to collaborate only with those like themselves, or else to collaborate with the other type but demand that those partners do more of the work relative to their position in the author order.
This sort of possibility had been documented previously. (See Young (1993).) More surprising is that in many of the models we consider, the smaller the underrepresented population, the more likely it is that they end up being discriminated against. Justin first noted this sort of effect in bargaining and cooperative scenarios in an unpublished paper (Minority Disadvantage in Population Games). It occurs due to a discrepancy in the learning environment for minority and majority groups. Minority types run into majority types all the time, because majority types are, of course, in the majority. Majority types meet minority types only rarely.
As a result, minority types quickly learn to accommodate whatever behavior majorities are engaged in. In strategic scenarios, this often involves doing something relatively safe – taking an action that will guarantee some decent outcome, but avoiding a risky choice that might lead to a better payoff. Once minorities have learned to be accommodating in this way, majority types then slowly learn to take advantage of them.
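Putting the pieces above together, a toy population simulation of the bargaining case might look like the following. The parameter values are illustrative and not the paper's; the point of the sketch is only to show where the asymmetry comes from: minority agents are paired with majority partners on most of their rounds, while majority agents meet minority partners only occasionally.

```python
import random

# Uses nash_demand_payoff, Learner and TypeConditioner from the sketches above.
# Parameter values are illustrative, not the paper's.

def run_bargaining_simulation(n_agents=100, minority_fraction=0.1, rounds=200_000):
    demands = [4, 5, 6]                      # low, fair, and high demands on a pie of 10
    types = ['majority', 'minority']
    n_minority = int(minority_fraction * n_agents)
    population = [
        ('minority' if i < n_minority else 'majority', TypeConditioner(types, demands))
        for i in range(n_agents)
    ]

    for _ in range(rounds):
        (type_1, agent_1), (type_2, agent_2) = random.sample(population, 2)
        demand_1 = agent_1.choose(type_2)    # behavior is conditioned on the partner's type
        demand_2 = agent_2.choose(type_1)
        payoff_1, payoff_2 = nash_demand_payoff(demand_1, demand_2)
        agent_1.reinforce(type_2, demand_1, payoff_1)
        agent_2.reinforce(type_1, demand_2, payoff_2)

    return population
```

Because a minority agent's cross-type strategy is updated on nearly every round while a majority agent's is updated only rarely, the minority tends to settle first, often onto the safe low demand, after which the majority slowly learns to demand more against them.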
The figure below shows an example of the sort of results we are describing. It shows outcomes for 10,000 simulations of populations with two conditioning types playing the collaboration game. Each bar shows the proportion of outcomes in which actors learn to bargain fairly with, or else to discriminate against, the other type. The smaller the minority population, the more likely it is that the majority learns to make unfair bargaining demands against them.
As we argue in the paper, this may help explain why some disciplines have so much trouble retaining members of underrepresented groups. It also might help explain why underrepresented groups tend to cluster into academic sub-disciplines. In our models, we find that when minority types end up with poor bargaining outcomes when collaborating, they learn to stop collaborating with those unlike themselves, but may continue to collaborate with other minority types.
The take-away? Even when seeming barriers to underrepresented groups in academia, like bias and stereotype threat, have been mitigated, strategic learning can still hurt minorities, and this is more likely for small minority groups. These outcomes occur whenever actors condition on types. Given what we know about human nature, it doesn’t seem likely that academics will be able to start treating people of all genders and races equally anytime soon. But this is one more reason to try!
I have a possibly embarrassingly-simple question.
Why does the strategic behavior described here not count as a form of implicit bias?:
"researchers can condition their behavior on the type of partner they interact with"
This is a kind of discrimination without bias, but that still revolves around social categories that people fall into? I think I'm missing a step somewhere. Thanks in advance for any help.
Posted by: Stacey Goguen | 03/02/2015 at 04:44 PM
So, suppose that the subjects in this model do not start out differentiating between types *at all*. Then, what is the result? Will there still be bargaining benefits on the part of the majority?
It seems to me that there have to be some initial shocks in the system toward differential treatment, before the minority accommodates and the majority exploits, right?
So your conclusion stands: now that we have such type-based bargaining benefits, here is another mechanism that helps keep the equilibrium in an 'unfair' state. What you don't have is an explanation of how things came to be the way they are.
(Which is not a criticism at all; this is just what the explanatory force of such evolutionary models amounts to).
Posted by: Bruno Verbeek | 03/02/2015 at 04:47 PM
I see that the explanation offered here is a direct explanation of discrimination that does not flow through implicit bias. Very interesting stuff.
Strategic learning seems likely also to contribute to implicit bias (and thus, perhaps, to further discrimination that is caused by implicit bias).
Suppose I come to hold this belief and engage in a pattern of behavior that reflects it: "Xs belong to a type such that I don't have to give members of this type as much compensation for their labor as I would have to give to Ys." It seems likely that I will also, over time, come to hold other beliefs that explain why Xs are exploitable in this way -- that Xs are less desirable as collaborators, that the quality of their work is inferior, etc.
Will the resulting implicit bias simply be epiphenomenal, a way of rationalizing the level of discrimination that I am already engaged in? Or will it actually contribute to further discrimination? I tend to think the latter: it is a new factor that further depresses the willingness to pay Xs fairly for their collaboration. I'm interested to hear about what the authors and others have to say about this. (I have read only this blog post and not the paper. Sorry if this point is addressed there.)
Posted by: Sherri Irvin | 03/02/2015 at 07:36 PM
Thanks for this, Sherri!
We don't address this possibility in the paper (that patterns of learning could lead to discrimination could lead to implicit bias could lead to further discrimination). It seems pretty plausible to me.
We've found it tricky, in writing the paper, to separate behavior consistent with bias from psychological bias. Obviously these two things can come apart, and our models only represent *behavior*. Understanding the causality between these two things would be a separate (interesting) project.
Posted by: Cailin O | 03/02/2015 at 11:30 PM
Thanks for the comment, Bruno. Maybe I missed something in your comment, but this paper definitely is an attempt to explain how things come to be ‘the way they are.’ We begin with a mixture of agents: some condition on type while others do not. We then demonstrate that in many cases everyone slowly learns to condition on type, and that the minority group is typically discriminated against. The aim is to show not only that minorities can be extremely disadvantaged in a wide class of circumstances, but also how such discriminatory norms can emerge in the first place (and, moreover, how the population can all come to learn to condition on type). Apologies if this sketch is confusing -- this is all explained more thoroughly in the paper.
That’s a really interesting suggestion, Sherri! I agree that the resulting implicit bias will have additional negative effects, further cementing the agent’s unwillingness to pay Xs fairly, as you put it. In addition to this, though, the new bias against minorities may make it even more likely that discriminatory norms that disadvantage underrepresented groups emerge in other, different bargaining and cooperative contexts. So, the establishment of a discriminatory practice in the workplace that disadvantages underrepresented minorities may make it easier for similar sorts of arrangements to emerge in the home or in the public sphere. This of course is all speculative, but nonetheless worth following up on!
Posted by: Justin Bruner | 03/03/2015 at 05:02 AM
Dear Stacey,
Your question is very apt. What we mean is that while players condition on types, there is no underlying psychological bias that pushes them to do it one way or another. Players are just as happy demanding more or less of another type, as long as it benefits them to do so.
Then, as we discuss, the players learn their way into states where one type systematically treats the other in a way that is consistent with bias. As Sherri points out, this sort of systematic disadvantaging behavior could very well then lead to psychological biases against the other type.
Of course, in the real world biases are always at play, so you can't so neatly separate the sort of learned behavior we see from behavior based on psychological biases.
Hope this helps clarify our thinking!
Best,
Cailin
Posted by: Cailin | 03/03/2015 at 10:24 PM
If this mechanism is at work, would it help (i.e., reduce discrimination) if people were rewarded for behaving fairly? I think, if the agents (OK, the academics) learn to repeat choices that benefit them, and you create an environment where behaving fairly is rewarded, a sufficiently high reward could counteract the benefits from discriminating against a minority, right?
Posted by: Katinka Quintelier | 03/03/2015 at 11:12 PM
I'm wondering how your paper compares with the models of in-group/out-group types that were used in the 70's (and later). One recent one using the in-group/out-group approach seems especially germane: "Evolution of In-Group Favoritism" by Fu et al. Just thought you might like to know of it, if you hadn't already. It is open access: http://www.nature.com/srep/2012/120621/srep00460/full/srep00460.html
Posted by: S G Sterrett | 04/20/2015 at 09:12 AM
Dear Susan - Thanks for the tip! I hadn't seen this paper.
Dear Katinka - Yes, if there is a way to engineer academic environments so that fair behavior is rewarded (or just enforced, if that is possible) you shouldn't see bias of the sort that arises in our models.
Cheers,
Cailin
Posted by: Cailin | 04/21/2015 at 06:58 PM