"Academic publication needs fixing. Even the 12% uncited rate for medicine seems large to me, particularly given what medical research costs. The one-third rate for social science and more than 80% for humanities are really troubling. But whatever we do, let's preserve somewhere what's good about academic articles: full descriptions of methods and expert evaluation."
Dahlia Remler
Today during lunch with a rather prominent scholar (henceforth PS), we were exploring some of our methodological disagreements politely and respectfully. At various points we thought it might be helpful to appeal to some of our papers, including work (as it happens) unfamiliar to the other, in order to elucidate our positions. This entailed some pleasant digressions in which we summarized this past work and made it relevant to the other.
After lunch, I decided to google one of PS's (previously unknown) papers that had intrigued me. I noticed it had 9 citations--respectable but not high by PS's standards. Because I publish a lot and am (ahh) not shy about self-citation (this seems to be a boy thing), most of my own papers have some citations, too; it does not follow that they are read by others. In fact, one reason I enjoy blogging so much is the instant gratification of knowing it attracts readers (and reader feedback in all its forms is cherished).
With scientific metrics and evidence of 'impact' increasingly important (in promotion, grant-making, etc.), those uncited papers are deemed "problematic" (see Remler above; I quote her because she has lots of very sensible things to say about the data that go into such rhetoric and the underlying practices). But in reflecting on today's lunch conversation, I came to see that even unread journal articles can have a use beyond filling lines on CVs and providing evidence of ('useless'?) 'productivity' or 'quality' at some evaluation time.
Beyond many other things, a journal article is also a discrete step in a scholar's public development. Before I get to that, two quick qualifications: first, the 'discrete step' part may be an artificial boundary, but that's okay for what follows. Second, the 'development' part need not presuppose that a scholar knows where she is going with her oeuvre or that there is a linear development. All I mean is that the journal article contains the fruits of intellectual development that has been scrutinized -- let's stipulate -- by referees and (non-trivially) oneself at 'professional' standards. (A third quick qualification: by 'professional' all I mean is work that can be discussed in public, with recognized attribution, as meeting some disciplinary quality standards.)
To see what I am getting at, note that all genuine scholarship involves some serious investment of time and attention in arcane matters that do not pan out. It is a known fact -- although often forgotten -- that this will involve false starts, garden paths, and seemingly fruitless research. Even if we stipulate that such work must also involve research that does produce 'results,' such results may, at first sight, and maybe even nth sight, not really be all that significant or relevant. For such research is the intellectual reservoir on which all knowledge, including some of the very impactful stuff celebrated by many, floats. Sometimes it is even the necessary preparatory work for the stuff that catches fire (in PLoS, Nature, etc.) later, for reasons that have nothing to do with the earlier work. These things are hard to predict--research is supposed to be surprising, after all. It is an open question whether society should invest in maintaining a reservoir of highly trained and very intellectual scholars, but as long as it does, we should also recognize that it is entirely foreseeable that lots of our work will not pan out or will fail to have a social application (even among those of us who aim at social utility--this post is not a point about the significance of 'pure' research).
The previous paragraph is completely compatible with the elitist thought that some limited number of folk write the limited number of highly cited papers (here you can nod sagely about the 80/20 rule, etc.). And it is also compatible with the thought (although it may be controversial what it shows) that a trajectory of productivity and cited papers by a particular scholar predicts later impact (the Matthew effect and all that).
Okay, let me get to the point. Our individual scholarly intellectual development consists, in part, of discrete steps. These steps are made discrete, in part, by publication, where we refashion our work so that it can be (in part) self-standing and treated as such by ourselves and others. We thereby arrive at points of view and even, in some respects, come to own them (our name is attached to them), including for our future selves. Later we may well forget the details of the research that went into the paper and even the fine-grained elements of our own argument (etc.). But in the act of drafting and evaluating our own work for publication, we come to treat it as a separable and discrete bit of our own labors (in a broad sense) that we make available to our peers and their judgment. (That's compatible with our other work being more interesting, better, etc.)
Even if our overworked and underwhelmed (by us) peers show no interest in our particular discrete steps in our particular development (and laugh with us at our pretensions to having an oeuvre, or work that can be read in light of each other), it does not follow that our unread and uncited papers are thereby without merit. Judgments of merit are only possible by reading the work (by folk who are trained to judge, etc.). Nor does it follow that they lack social utility. For our uncited and unread work just is a somewhat privileged part and parcel of a particular scholar's long, never-ending apprenticeship in developing and furthering their expertise, skill, and knowledge (which may or may not have social utility at some point or another). For in no field, I think, is social utility constituted by journal publication or citation.