A proxy is an indirect measure. Sometimes a measure can have two functions; for example, one can measure X in order to (or accidentally) indirectly track Y (where X is something interesting in its own right). X is a decent proxy if it reliably co-varies with Y. Unless one has a very well confirmed background theory, it is very hard to tell whether a proxy is reliable (accurate, etc.). Absent such background theory, one way to tell whether your proxy (X) is reliable is to measure the target system Y directly, say, periodically (and – let’s stipulate – with greater difficulty and expense). Of course, if one can measure Y directly and reliably without a proxy measure, then the proxy measure becomes dispensable unless X holds independent interest. Ideally, direct and indirect measures work in tandem so that lacunae in each become visible and modelers and end-users of the models get a clearer sense of social reality. Of course, among intentional systems with feedback loops the measures themselves become part of the causal nexus and may well play some role in shaping the evolution of the system. So a measure X can be a proxy for Y and also play some role in the future development of the social ecology (Z).
With that jargon, let’s turn to professional philosophy (recall this post, in particular).
The Philosophical Gourmet rankings (hereafter PGR) are said to be "primarily measures of faculty quality and reputation" of selected institutions and, indirectly, measures of employability. The PGR aggregates expert judgments by way of a survey of selected, expert faculty. In addition to measuring quality, one of its main purposes is to guide student choices in graduate education. Several tabs in the report are directly related to, and informative about, graduate education, choosing graduate departments, and job placement (and "employment"). They are, thus, also a proxy for employability. This was reiterated during the past weeks by Brian Leiter, who claimed in a series of (partly polemical) posts that there is “much correlation” between “PGR rank and overall tenure-track placement” (here), and that the PGR is an indispensable measure if one is interested in “academic employment” (here, read also Leiter’s important and sensible caveats).
As an aside, if the PGR indeed measures quality, and if it indirectly tracks (even predicts) hiring patterns, then we might also be inclined to think that the labor market for professional philosophers functions very well. One explanation for this may then well be the existence of the PGR, which provides prospective employers with crucial information (that would otherwise have been left un-aggregated) and, hence, becomes part of the causal nexus. One need not be a Hegelian (or Leibnizian, or Smithian) to appreciate the cunning of history that a self-proclaimed admirer of Marx is, in part, responsible for guiding the invisible hand of the market-place by aggregating expert judgments. Only in America, as my grandmother used to say. (I return to this below.)
The Jennings rankings track reported hiring results (see here for her aims). The Jennings rankings were made possible, in part, by the past efforts of Brian Leiter to encourage greater publicity in placement and hiring data. In addition to aggregating reported hiring patterns, my old NewAPPS colleague, Carolyn Dicey Jennings (CDJ), also estimates “the total number of candidates seeking employment and to calculate an approximate overall placement rate.” (here) If complete (and reliable) data were available, the Jennings rankings would measure past employability directly. Unfortunately, departments have incentives to prevent this from happening, so it will be interesting to see what strategies can be developed to encourage greater transparency. Because aggregating the data is still time-consuming and difficult, the Jennings rankings are a trailing measure. This is one reason why CDJ emphasizes what she calls "recency" (see here and here; again the context is, in part, polemical); presumably, all other things being equal, departmental placement rates provide some guide to employability. So, as is, the Jennings rankings are also a proxy (and sensitive to estimates).+
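To see how sensitive an "approximate overall placement rate" is to the estimate of total candidates, consider a toy calculation. The counts below are invented, not CDJ's figures; the point is only that the same reported placements yield noticeably different rates under different candidate estimates.

```python
# Toy illustration of placement-rate sensitivity to the candidate estimate.
# All counts are invented; none are drawn from the actual Jennings data.
reported_placements = 310          # hypothetical reported hires
estimates = (480, 520, 600)        # hypothetical range for total candidates

rates = {n: reported_placements / n for n in estimates}
for n, rate in rates.items():
    print(f"estimated candidates={n}: placement rate={rate:.0%}")
```

Because the denominator is itself an estimate, the rate inherits its uncertainty, which is one reason the Jennings rankings remain a proxy rather than a direct measure.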
In an ideal world, Leiter and CDJ would use the PGR ranking and the Jennings rankings to improve each other.
As I noted (based on this and this), the Jennings rankings reveal that even if one grants Brian Leiter that there is great correlation between PGR ranking and employability in the professional philosophy labor market, the PGR is an imperfect proxy for employability.++ For, first and most importantly, the PGR omits departments that remain competitive in the wider academic job market (outside what I have called the ‘PGR ecology’). Even though the data is incomplete, this remains a robust result simply because it makes visible real patterns of hiring that are less detectable with the existing PGR measures. (Given that even the Jennings rankings omit the contribution of non-English philosophy departments – some of which intersect in non-trivial fashion with the PGR ecology (recall Mohan’s post) – both PGR and Jennings probably fail to track the international professional philosophy academic job-market.) Moreover, a second, more tentative consequence of the Jennings rankings is that some PGR departments do better than one might expect given their relative ranking in the PGR; I am less confident about this claim because it may turn out that a richer, more complete data-set (or different time frames) will vindicate the PGR rankings.
The first revelation suggests that if one sticks to the idea that the PGR measures quality, then the academic job market in professional philosophy tracks quality less well than initially thought. A modest – but unfortunately highly charged – hypothesis to explain this is that the overall rankings of the PGR are imperfect when it comes to work in 19th and 20th century continental philosophy, American pragmatism, Philosophy of Race, Catholic philosophy, and Feminist philosophy (etc.); all topics that fit imperfectly with the ways philosophy is practiced within the ‘core’ of the PGR ecology (by ‘imperfect’ I do not deny that there are analytical feminists, analytical Thomists, etc.), but that are key to the self-identification of departments like Saint Louis University, Fordham University, Vanderbilt University, Dalhousie University, University of Oregon (etc.). Some of these departments do show up in the specialty rankings of the PGR, of course, but as Kieran Healy showed in the past, the overall rankings of the PGR are not equally influenced by individual specialties (see here and here). Not surprisingly, my old NewAPPS colleague, John Protevi, has argued that in the past the PGR was poorly designed to track developments in so-called 20th century continental philosophy; Protevi may, perhaps, feel vindicated by the market data (it would be nice to know more about the AOS of folk hired without the PGR quality-stamp-of-approval).
A second hypothesis, related to and not mutually exclusive with the first, that would account for both revelations is that the academic job market in professional philosophy is not fully competitive (in the sense required for PGR-measured quality to track employability). I offer four possible causes: first, there may well be hiring-networks; second, some departments specialize in niches that have special appeal in the job-market (Pittsburgh HPS comes to mind); third, individual supervisors and committee members may be more important than department affiliation; fourth, university reputation may trump department reputation (as Leiter puts it, all else being more or less equal, "'brand name' universities still enjoy a slight advantage in job placement"). These are not independent hypotheses (a hiring network may specialize in some niche of philosophy and may track ‘university brand,’ etc.). As David Chalmers notes, the “philjobs appointments database” may evolve “in such a way that all sorts of different statistics about placement will be easily measurable and available,” and so we may start to make more fine-grained causal analyses before long. One way that the Jennings rankings may well evolve is to track individual supervisors/committee members (and, perhaps, letter writers), so that one can also evaluate the relative contributions to employability of supervisors and departments.*
+Critics of the Jennings rankings charge that her measure does not do enough justice to different kinds of jobs and different career expectations. To note this is not to deny that CDJ is admirably transparent about her values/aims. One may hope that the success of the Jennings rankings will generate better data so that alternative measures that incorporate other values can be developed, either by CDJ or by her critics.
++Hopefully, in accord with best research practices/ethics, Leiter will start to allow outsiders access to his data so that they can run their own statistical analyses with it. (This should be arranged in such a way that confidentiality issues are respected.)
*I do not welcome this trend. Obviously, it might also increase the (deplorable) star-obsession in the profession.