The Cost of Peer Review and the Future of Scholarly Publishing

As is being discussed a good bit around the academic blogo-/twittersphere this morning, Jennifer Howard reports in today’s Chronicle of Higher Education on a new report soon to be released by a committee organized by the National Humanities Alliance, entitled “The Future of Scholarly Journals Publishing Among Social Science and Humanities Associations.” This report seems to have a couple of compelling findings: first, that the per-article cost of journal publishing in the humanities and social sciences is more than three times that in the science, technical, and medical (a.k.a. STM) fields, and second, that this higher cost is due in no small part to the greater selectivity of humanities and social science journals. Where the STM journals under study (which seem to be primarily the official journals of learned societies) have an acceptance rate of around 42 percent, the humanities and social science journals publish about 11 percent of submissions. Journal articles in these fields also tend to be about 50 percent longer, meaning fewer articles per journal issue. The tighter pre-publication filtering these journals require makes peer review dramatically more expensive, driving the per-published-article cost to nearly four times that of STM journals. And given that, as the Howard article notes, the author-pays model of journal funding will never work in the humanities, where the vast majority of research is either self-funded or funded by the author’s home institution, something else has got to change if journal publishing is going to remain feasible.

So here’s a wacky thought, one I’ve been writing and talking about for a while now: what if we stop doing pre-publication peer review? It’s of course the economics of print that require such gatekeeping — because there can only be so many pages and so many issues of any given journal, we end up only being able to publish a little over a tenth of the material submitted. But if the primary venue for the journal is the internet — and really, honestly, how many of a journal article’s readers come to it first through the print version? — then those economics radically shift. We’re no longer constrained by the bounds of what we can print and ship, but instead by what we can put into our publishing format. In that case, we’d be much better served, I believe, by eliminating pre-publication peer review. Perhaps the journal’s editorial staff reads everything quickly to be sure it’s in the most basic sense appropriate for the venue (i.e., written in the right language, about a subject in the field, not manifestly insane), but then everything that gets past that most minimal threshold gets made available to readers — and the readers then do the peer review, post-publication.

It’s those readers, after all, who are the article’s true peers, not the two or three editor-selected reviewers who now give the article the up-or-down vote. It makes no sense for the labor of the same small set of reviewers to be drawn upon again and again when there’s the potential for more broadly and fairly distributing that work. And it makes no sense for article publishing to be subject to the crazy delays that now hold a lot of work hostage, first waiting for the peer reviews to come in, and then waiting for the journal’s backlog of accepted articles to clear out. Why shouldn’t readers be able to read and respond to that work right away, and why shouldn’t that reading and response constitute the article’s peer review?

This of course depends on the assumption that readers will actually bother to respond — that they’ll be sufficiently committed to the maintenance of the collective enterprise of the publication that they’ll take the time to comment on and review submitted articles (as opposed to the mostly anonymous peer reviewers of today, who have proven themselves willing to do that work). One way to ensure such participation might be a pay-to-play model, in which readers are asked to do a certain amount of reviewing in order to earn the “credit” required to submit an article.
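To make the pay-to-play idea a bit more concrete, here is a minimal sketch of such a review-credit ledger. The class, the credit values, and the three-reviews-per-submission exchange rate are all invented for illustration; nothing here describes an actual MediaCommons mechanism.

```python
# Hypothetical sketch of a "pay-to-play" review-credit ledger: readers earn
# credit by reviewing published work, and spend it when they submit their own.

class ReviewCreditLedger:
    def __init__(self, credits_per_review=1, credits_per_submission=3):
        # Assumed exchange rate: three completed reviews "pay" for one submission.
        self.credits_per_review = credits_per_review
        self.credits_per_submission = credits_per_submission
        self.balances = {}  # reader -> accumulated credit

    def record_review(self, reader):
        """Credit a reader for completing one post-publication review."""
        self.balances[reader] = self.balances.get(reader, 0) + self.credits_per_review

    def can_submit(self, reader):
        """Has this reader done enough reviewing to earn a submission?"""
        return self.balances.get(reader, 0) >= self.credits_per_submission

    def record_submission(self, reader):
        """Spend credit on a submission; refuse if the balance is too low."""
        if not self.can_submit(reader):
            raise ValueError(f"{reader} needs more review credit before submitting")
        self.balances[reader] -= self.credits_per_submission


# Example: a reader reviews three articles, then submits one of their own.
ledger = ReviewCreditLedger()
for _ in range(3):
    ledger.record_review("reader_a")
ledger.record_submission("reader_a")  # succeeds; the balance drops back to zero
```

The exchange rate is arbitrary, of course; the point is only that participation in review becomes the currency that submission spends.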

But another, even more basic, assumption made by such a model is that the “journal” function will continue to exist in a fully networked publishing model. After all, there would be no particular point in waiting for some arbitrary moment to release an “issue,” when new material could be made available as it is ready. A more likely scenario is that we develop either institutional or disciplinary publishing systems that function like blogs, featuring new articles (or texts of whatever length, as those restrictions fade away as well) as they appear, but keeping the archives available and in play in perpetuity.

This is the kind of publishing model we’re attempting to build at MediaCommons. It’s been very slow in developing, but the tools we need to put such peer-to-peer review in place should be ready for testing very soon. I hope that all of you with a vested interest in developing new publishing models — in ensuring that scholarly publishing in the humanities can survive — will keep talking about these issues, will join us when we start testing our new systems, and will find ways to help us build a working structure for the future.

13 thoughts on “The Cost of Peer Review and the Future of Scholarly Publishing”

  1. Couldn’t agree more. As Shirky puts it in *Here Comes Everybody,* we need to “publish first, filter second.”

    An added bonus is that if such new publishing models do arise, the junior faculty who adopt them first will probably be rewarded, because the evaluations of their work will be public (even if anonymous). Seems to me that P & T committees would appreciate access to that kind of information.

  2. I’ve been thinking about this paradigm in relation to Flow — we’ve thought about putting up a “most clicked” or “most commented” section to guide readers to the articles that have been ‘democratically’ selected as most interesting, but the problem with an all-accessible internet journal is that the ‘most clicked’ articles are those dealing with porn (one on 8-Bit Atari Porn, the other on Palin porn) and the most commented article has turned into an impromptu gathering/meet-and-greet site for fans of a Spanish-language telenovela. What to do?

  3. Annie – one way to avoid the problems you mention is to follow Google’s model of not only counting links, but weighting them by their own linked-to-ness. If you have profiles on MC or Flow, those profiles will gain in prominence as people’s postings & comments are rated by their peers. Then the “top postings” will emerge not just from sheer numbers of hits/comments, but from the relative prominence of the readers & commenters (a rough sketch of this follows at the end of the thread). Make sense?

    And as always, I second what Kathleen said…

  4. Kathleen,

    I don’t have a subscription and thus cannot read the original article, but I do wonder how much of the difference between the natural sciences/medicine and the humanities/social sciences may lie in their entirely different publication models.

    As someone who went through grad school when pre-professionalization shifted from few Ph.D. students having published to publication becoming the expectation, I saw too many of my colleagues send out material that no one should have considered (or, really, did consider) publishable. In turn, the time many journals take to publication is ridiculous, but it is made necessary by too many submissions that are barely more than rough drafts clogging up the peer review process. Since then, of course, things have gotten even worse, with undergrads now expected to publish.

    If anything, I’d like to see us as an intellectual culture moving back from quantity to quality. Of course, no article can (or should) ever be an ideal(ized) abstract object, but the process of revision and peer review and copyediting is a helpful one that I would hate to have disappear.

    FWIW, as editor of a journal that remains old school in its emphasis on peer review and yet, as an online journal, has a time to publication of under four months, I think there are ways to create quality work (and yes, to nurture young scholars who may indeed need the critical feedback that their own teachers are sometimes unable or unwilling to provide) without thinking of the process as gatekeeping so much as apprenticing, maybe?

    Finally, I’m not sure popularity, even in the way Jason envisions it via link rating, is the best way to foster academic research. Let’s face it, not everyone does (or should) research sexy topics. And yet that work is of vital importance. There’s a reason academic publishing tends not to work on a sales model, namely that research for its own sake is something we should cherish and encourage. So what if only three people would ever read my friend’s paper on Aelfric and Christian Heroism (yes, it’s actually lying on my to-read pile :)? I’d hate to have Google ranking evaluate academic value in research…

  5. As the editor of an online, zero-budget journal, to steal from Marx, I resemble these remarks. (That was Groucho I meant.) There are a number of disciplinary and quasi-disciplinary networks in the social sciences, such as NBER and SSRC, and the reviewing functions vary. As you’ve written elsewhere (I think in a MS somewhere online…), the key question in developing an online disciplinary archive is how to hit the sweet spot of a critical mass of scholars participating, a review mechanism that emphasizes substance and encourages participation, a sustainable economic model, and a (preferably self-selecting) anti-crank tendency.

    Media studies has an advantage in the first area, but anyone trying to build something in older fields would probably want to build it first among grad students and enough senior scholars who are willing to lend some gravitas/status to overcome inertia. The review mechanism is something that will inevitably evolve; as with the various waves of social-networking technologies thus far, it’s best to assume that some people will put in gazillions of hours on tools that will *probably* be obsolete within 2 years as something better becomes available. The economic model would probably combine a reviewing economy *and* nominal annual fees (say, $15), which will provide a little support and help with the anti-crank selection.

    I’m glad you commented on the Chronicle story so quickly–lots to think about!

  6. As Kristina’s coeditor, I second her remark that it’s possible to combine old and new schools of thought, and I also suggest that at this moment of transition, this is one way to encourage acceptance of online publications. Internet-only publishing is currently perceived as less prestigious than print. Double-blind peer review is a gold standard (even if it’s not all that blind, and even if it doesn’t always work that well), and hewing to it is reassuring to our authors and to those who judge their work: hiring, tenure, and promotion committees.

    As someone who works full-time in the STM market, I say that the humanities could learn a lot from it, notably dual publication (online and print), quick turnaround times (they print received/accepted dates!), and, occasionally, useful ideas about copyright (Creative Commons, for the PLoS folks and others). But the STM market is built on a scaffolding that is not good for the long term, as it relies all too heavily on a few large corporations who charge huge amounts to libraries and restrict access, although Elsevier, for one, is attempting to add value (http://chronicle.com/news/index.php?id=6808&utm_source=pm&utm_medium=en). The STM market also relies on authors paying page fees, which can amount to literally thousands of dollars–those color figures on special paper don’t pay for themselves!–but of course researchers get grants, and the page fees are usually paid out of that. This would obviously never work for the humanities.

    As a copyeditor, I say, you really, really, really (trust me, REALLY) don’t want to see unedited articles. Judging by the peer reviewers’ comments I see, errors of any kind distract greatly from accessing content. Many peer reviews I see end with line-by-line enumerations of every error that forced the peer reviewer to NOT focus on content. I’ve seen calls for printing documents as they are submitted, even for ignoring that thing called “style” and presenting, for example, references in any style the author randomly chooses. My response to that is twofold: first, ARE YOU INSANE, and second, YOU HAVE CLEARLY NEVER TRIED TO TYPESET THAT.

    As someone employed in the field of publishing and a critic of restrictive copyright, I would suggest that journals remain a bastion of edited, vetted, revised content. In the field of English, we call this process “writing.” Journals foster writing for a particular audience. If writers want to know how they’re doing, they ought to pass their paper around to readers privately first, then blog it for comments. Then they ought to be able to submit it. Unfortunately, most journals (including the one I edit) will not take blogged content, because this is considered previously published. This is one thing that ought to change, and in fact, I don’t think this ridiculous rule will last long at all.

    Publishing in a journal ought to actually mean something: that you are contributing to a larger discussion for a particular audience. You can call this vetting “gatekeeping” if you want. Why is it bad? There are a lot of gates out there. Surely one of them is a gate you want to walk through.

  7. Thanks for the link, Joe!

    Karen, my knee-jerk reaction is to resist the notion of “gatekeeping” in general, but I do see that some thresholds to entry are always there, and can be valuable. I don’t want to discount, however, the significant threshold to entry that the open expression of opinion by one’s peers represents. There’s a good bit of evidence to suggest that open, online peer review processes actually improve the quality of submissions to journals (see, for instance, Koop and Pöschl) — which stands to reason: who wants the world to see one’s sloppy errors? I’ve also got the sneaking sense that some of our utterly well-intentioned editorial practices also breed a kind of laziness in authors. Some of it’s that, as you point out, posting drafts of articles for comment is considered “prior publication” by many journals, so authors often can’t get the feedback they need before submission. But some of it, too, may be the unconscious assumption that someone else will catch the errors in writing, thus reducing the author’s need to be careful. Part of what I’m after here is suggesting that we need to take responsibility for our own publishing, as the system that we’re relying upon may not be around terribly much longer.

    I would never claim STM publishing as anything like a model for the rest of us. But I also very strongly resist the claim that peer review of the sort that has been done for the last several decades is the best possible form. And making claims for traditional peer review based on institutional acceptance is really putting the cart before the horse. We need to find ways to demonstrate to our colleagues and our administrations that the ways we want to do things are in our — and their — best interests, rather than hewing to a status quo that does not, and cannot, continue to work.
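A rough sketch of the reputation-weighted ranking Jason describes in comment 3 above, purely as an illustration: the names and numbers below are made up, the weighting is a simplified one-level stand-in for the recursive PageRank idea, and none of it describes how MediaCommons or Flow actually rank anything.

```python
# Toy sketch of ranking posts by the prominence of their endorsers rather than
# by raw click/comment counts, loosely in the spirit of PageRank.

# Hypothetical data: which readers endorsed (rated/commented on) which posts,
# and how many peer ratings each reader's own contributions have received.
endorsements = {
    "post_a": ["maria", "jun"],                                   # two well-rated endorsers
    "post_b": ["lurker_1", "lurker_2", "lurker_3", "lurker_4"],   # four unrated endorsers
}
peer_ratings_received = {"maria": 12, "jun": 7,
                         "lurker_1": 0, "lurker_2": 0,
                         "lurker_3": 0, "lurker_4": 0}

def prominence(reader):
    """A reader's weight: a baseline of 1 plus the ratings their own work has drawn."""
    return 1 + peer_ratings_received.get(reader, 0)

def weighted_score(post):
    """A post's score is the summed prominence of its endorsers, not their count."""
    return sum(prominence(r) for r in endorsements[post])

for post in sorted(endorsements, key=weighted_score, reverse=True):
    print(post, weighted_score(post))
# post_a scores 21 (two prominent endorsers) and outranks post_b, which scores
# only 4 despite having twice as many endorsers.
```

A fuller version would apply the same idea recursively, as PageRank does, so that a reader’s prominence is itself computed from the prominence of those who rate them, rather than from a raw count of ratings received.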
