Over at DailyNous there are two extremely eloquent guest posts on the PGR controversy. One is by Simon May, which is a stirring reminder of what philosophical nobility looks like. I happen to agree with most of it, although I am inclined to think that the statement that it is "impossible to overlook that so many women feature prominently amongst Prof. Leiter’s recent targets" is itself a consequence of selective perception. Lots of men are also among Prof. Leiter's targets; they have been for a long time. It's just that these many other targets have been less popular than some of the recent female ones that were the tipping point. (Yes, I know it's more complex than that, but still.) I know that Prof. May is extremely careful not to attribute sexism to Prof. Leiter ("Once again, this is not a question about his moral character—there is no suggestion of implicit misogyny here. But character is beside the point."). Even so, the not-so-subtle subtext here is: 'he's targeting women.'
But even if one thinks that Prof. Leiter's actions contribute greatly -- because of the popularity of his blog and the perceived power derived from his control over the PGR -- to 'climate' issues for women in philosophy (and I am not unsympathetic to that line of argument, although as always the situation is complicated because he has also done a lot more than 99.9% of the other men in the profession to make those issues visible),* it's not because there are women among his targets. For, even if there had been no women, he would be contributing to the (in May's apt phrase) "vices of sexism." (And I am self-aware enough to recognize that, as a polemical blogger and a particular kind of boy-philosopher, I may also be contributing in that way.) Women and feminists have been among his targets -- and in ways that contribute to climate issues -- but they have not been targeted because they are women and feminists (unlike the continental feminists, Creationists, Randians, Straussians, etc.). I say all of this not because I have sympathy for Leiter's attempts to brand his critics a 'cyber mob' or admire his demagogic rhetoric ("smear campaign"), but because it matters that we draw the right lessons [separation of power, no monopoly rankings, adoption of best practices in our rankings, etc.] from this whole sordid affair.
The other post is by Alex Rosenberg, who is one of the most important philosophers of economics and, you know, when he is at his best, somebody I look up to immensely. Rosenberg does not just defend the PGR (he's in good company); rather, he defends it come what may, not even deigning to acknowledge the existence of known problems with the PGR:
About five years ago, Kieran Healy, a sociologist...subjected the PGR rankings to an analysis. The most striking finding of his analysis was that the rankings of the top 20 departments by philosophers from the top 20 departments was almost identical to the ranking of the top 20 departments by philosophers from the bottom 20 departments. Our discipline is more like mathematics in its social structure than it is like other humanities or social sciences. What’s more, Kieran’s findings revealed what sort of weightings philosophers give to the various subspecialties in their reputational rankings (alas, I learned, the philosophy of science is not in the top three—phil of mind, phil of language and metaphysics).
All this means that Brian Leiter has provided us with a reliable picture of our discipline, one we can use, to advise our students, our junior colleagues, and even to correct myths about our discipline cherished by our administrators.
Come on, Alex.
Really, Alex, you gotta be kidding me.
It may well be that (a) the PGR offers a "reliable picture of our discipline" (more about that below), but (b) it does not follow that the ranking has "construct validity" (Rosenberg introduces this technical term in his post), and (c) it does not follow that the methods cannot be improved, say, by (d) responding to constructive criticisms and (d*) systematically incorporating best practices.
On (a): Rosenberg fallaciously equates "the discipline" with what I call the "PGR ecology." The PGR does not even rank all known PhD programs in the Anglophone world -- something that ought to be corrected asap -- although it sometimes mentions a few more along the way; it does not rank the vast majority of other philosophers who teach in non-PhD programs. Is it a reliable picture of the PGR ecology? I suspect that, however imperfect and flawed, it probably is reasonably reliable. (But see below.) Of the discipline? No way. The PGR has known problems in how it treats Continental philosophy, where, if the relevant experts were asked, we almost certainly would not see any stable measure (of course, asking those experts would also take us outside the PGR ecology, etc.), and it is especially weak in 20th century French philosophy. I suspect, Alex, you think that stuff isn't really philosophy, but that is mere prejudice on your part.
On (b): to get construct validity, we would need a lot more independent testing -- Alex, this is methods 101, no? -- than the test that Healy devised. (Here's a simple test: let's ask a quasi-random sample of non-PGR rankers who work at liberal arts colleges and state schools but tend to hire from within the PGR ecology.) This is not a criticism of Healy, by the way. In fact, a lot of those tests could be devised by outsiders if the data were shared (see below). Moreover, what is not known is to what degree evaluators/rankers get removed or gently discouraged from the pool of rankers (say, prior to 2009) if their responses were outliers. (Pruning of one's sample.) Finally, Wheeler has raised important concerns about the claim that there is no variance, both in the past and again in response to the present post. (Wheeler and I have had high-profile disagreements, by the way, so it's not like I am doing special pleading on behalf of formal philosophers.)
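To make the kind of independent test I have in mind concrete, here is a minimal sketch in Python of what such a convergence check could look like. Everything in it is hypothetical: the department labels, the scores, and the existence of an independent sample of evaluators are all made up for illustration (it assumes scipy is available); the only point is that construct validity calls for agreement across independently drawn samples, not merely across subsets of the same rater pool.

```python
# Hypothetical convergence check (all scores below are invented).
from scipy.stats import spearmanr

departments = ["Dept A", "Dept B", "Dept C", "Dept D", "Dept E", "Dept F"]

# Mean reputational scores (0-5 scale) from the published survey.
pgr_scores = [4.8, 4.5, 4.1, 3.9, 3.6, 3.2]

# Mean scores from an independent, quasi-random sample of evaluators,
# e.g., hiring faculty at liberal arts colleges and state schools.
independent_scores = [4.6, 4.4, 4.3, 3.5, 3.7, 3.1]

rho, p_value = spearmanr(pgr_scores, independent_scores)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")

# A high correlation across *independent* samples would be some evidence of
# construct validity; agreement between two subsets of the same rater pool
# (top-20 vs. bottom-20 departments) is a much weaker test.
```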
On (c): come on, Alex. Here are some obvious problems: in some of the sub-specialties there are very few rankers; in some sub-specialties it is not entirely clear whether rankers are really judging the same thing in each category (I recall some debate over 18th century/Kant at one point); there are known problems with bias that are not being corrected; some categories (applied ethics) cover a LOT of ground whereas others are rather narrow; some categories are still missing in action; and the raters are allowed to rank their undergraduate teachers and, I think, their supervisors if these have moved since they obtained their "highest degree." Finally, the PGR ought to rank all PhD departments. (This is a partial list; for more links to criticisms, see here.)
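To illustrate just the first of these worries, here is a toy sketch (all numbers invented, nothing drawn from actual PGR data) of why a category mean resting on a handful of rankers is fragile: a simple bootstrap over the individual scores shows how much wider the uncertainty is with five rankers than with thirty.

```python
# Toy illustration (invented scores): with only a handful of rankers in a
# sub-specialty, the bootstrapped spread of the category mean is wide.
import numpy as np

rng = np.random.default_rng(0)

small_category = np.array([4.5, 3.0, 4.0, 3.5, 4.5])        # 5 rankers
large_category = np.array([4.5, 3.0, 4.0, 3.5, 4.5] * 6)    # 30 rankers

def bootstrap_interval(scores, n_boot=10_000):
    """Return a 95% bootstrap interval for the mean score."""
    means = [rng.choice(scores, size=len(scores), replace=True).mean()
             for _ in range(n_boot)]
    return np.percentile(means, [2.5, 97.5])

for name, scores in [("5 rankers", small_category),
                     ("30 rankers", large_category)]:
    lo, hi = bootstrap_interval(scores)
    print(f"{name}: mean = {scores.mean():.2f}, "
          f"95% interval = [{lo:.2f}, {hi:.2f}]")
```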
On (d) and (d*): the PGR's track record is not very impressive. The PGR does not have transparency in data (recall and here) -- the PGR could be re-designed to respect the privacy of rankers and still give outsiders access to the data. Sadly, there is a long history of Leiter denouncing alternative rankings and folk who suggest improvements/criticisms (here).
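Respecting raters' privacy and sharing data are not mutually exclusive. Here is a minimal sketch of the sort of de-identification step I have in mind (the column names and rows are hypothetical, and it assumes pandas is available): rater identities are replaced with random codes before rater-level scores are shared with outside researchers.

```python
# Minimal de-identification sketch (hypothetical column names): replace
# rater identities with random codes and drop identifying attributes
# before sharing rater-level scores with outside researchers.
import uuid
import pandas as pd

raw = pd.DataFrame({
    "rater_name": ["Rater 1", "Rater 2", "Rater 1"],
    "rater_department": ["Dept X", "Dept Y", "Dept X"],
    "ranked_department": ["Dept A", "Dept B", "Dept B"],
    "score": [4.5, 3.0, 4.0],
})

# One stable random code per rater, so analyses of rater-level consistency
# remain possible without revealing identities.
codes = {name: uuid.uuid4().hex[:8] for name in raw["rater_name"].unique()}

shareable = (raw
             .assign(rater_id=raw["rater_name"].map(codes))
             .drop(columns=["rater_name", "rater_department"]))

print(shareable)
```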
So, to conclude, the PGR is defensible iff (i) the folk who run it show willingness to improve it in collaborative fashion and do not abuse the perceived powers that can be generated from it, and (ii) those of us who favor the PGR and, perhaps, receive privileges/benefits from it are willing to make the harms and injustices that follow from it available for discussion. Principles in the vicinity of (i) and (ii) are just responsible philosophy of social science. Let's try to hold ourselves to the same standards to which we hold the economists, Alex?
Anyway, since I started to draft my post, both posts have generated great comments at DailyNous, so check them out!
*Obviously such noble actions are compatible with other vices.
I once saw a professor dance her critique of Derrida. I (strongly) suspect that Rosenberg is himself working on a performative argument. I made a joke to that effect on Facebook, but the more I think about it, the more obvious it is.
I've read enough of Rosenberg's work to think he is too smart to believe that the arguments he's making here are good ones. Many point to the absurd discussion of property, but the construct validity argument is even worse, as you've demonstrated. Since that's an argument he would never make in a discussion of sociobiology, I really do think he's pulling our legs. You can literally point to the chapter in his textbook on Philosophy of Science that demonstrates the error here. (Ok, I don't own the text, but it looks like the relevant material is spread over chapters 9, 11, and 14.)
Or maybe I'm being too hopeful.
Posted by: Joshua A. Miller | 09/29/2014 at 06:47 PM
I'll self-indulgently reprise what I wrote at NewAPPS at the time about data transparency, since it still seems to apply:
The scientific analogy (as I understand it) doesn't license public, all-comers, release of the data. What it does license is that PGR ought to be willing to give its dataset, in confidence, to other researchers who are able to reassure them that they are (a) carrying out methodologically serious analysis and (b) trustworthy custodians of confidential data.
But calling for PGR to be willing to do that only makes sense if there is good reason to think that PGR does not already give access to its data under those conditions. I haven't seen any evidence that that's the case. (Such evidence would presumably be along the lines of someone saying: "I, person/group X, requested of PGR confidential access to their dataset to do the following study, and were refused without any good reason being given.")
Posted by: David Wallace | 09/29/2014 at 06:49 PM
David, perhaps you should try out your own recipe for obtaining PGR data before you tell us what "there is good reason to think."
Posted by: Eric Schliesser | 09/29/2014 at 08:38 PM
Joshua, I went and re-read that article, and it is so incredibly over the top that sarcasm has to be some kind of possibility. But I don't think it really can be.
Posted by: Richard Heck | 09/29/2014 at 08:52 PM
Eric: I didn't tell you what there's good reason to think. I asked if there's good reason to think it, and said what I thought would constitute good reason. I don't particularly want to do data analysis on the PGR, so why would I want to "try out my own recipe"?
Posted by: David Wallace | 09/30/2014 at 02:29 AM
David, there is overwhelming reason to think that the PGR has made no effort to reach out to its critics and have us help improve it. When folk have asked for data -- and this is amply documented -- they have been told it's private and that to share it would risk jeopardizing raters' privacy (and Leiter has repeatedly said this on his own site). But maybe you could get more out of the PGR than others, so please try your own recipe.
Posted by: Eric Schliesser | 09/30/2014 at 10:12 AM
Eric: I'm not aware of a documented example of someone doing what I have suggested (approaching PGR with a particular research goal in mind and asking to see its data, in confidence, to investigate that goal). The only such approach I'm aware of (Kieran Healy's) was accepted.
This isn't intended to be a rhetorical point: I'm not internet-omniscient and I'm happy to be pointed to examples. But the thread you link to is concerned with something very different: making the raw data public.
Posted by: David Wallace | 09/30/2014 at 11:55 AM