Commentary

Academic Publishing: Toward a New Model

Michael Morgenstern for The Chronicle

May 18, 2016

The web, we all thought, was going to transform academic publishing. At the very least, it would make research far more accessible, lowering the cost and expanding the reach of publications. At most, it would fundamentally alter the nature of research itself, making it far more collaborative. In either case, though, academic publishing as we knew it was doomed.

Now, a decade later, while the web has fundamentally transformed so many areas of our lives, its impact on academic publishing has been modest at best. There are, it is true, a few open-access journals, and many academics maintain blogs, but contrary to expectations, journal costs have soared and our writings remain perhaps less accessible than before, locked behind paywalls while libraries forgo buying print versions. While it is not difficult to understand why this has happened, a solution has been elusive.

Academics want their work to be widely accessible, but even more than that they want tenure, promotion, and raises. Most institutions base their evaluations on peer-reviewed publications, and they rely on the publishers themselves not only to disseminate research but also to maintain a credible peer-review system.

In theory, universities could conduct their own peer reviews, as they do to some extent when seeking outside letters (though the writers of such letters, in my experience, often fall back on proxies such as a candidate's number of peer-reviewed publications). In practice, they outsource this task to the publications, upon whose judgments they base their own assessments.

When faced with a choice between publishing within the traditional models or in an open-access venue whose review is minimal or (rightly or wrongly) suspect, most academics choose to sacrifice the potential of a larger audience for more certainty in furthering their careers.

How, then, might we rethink academic publishing to increase accessibility while maintaining the benefits of peer review? More important, how might we do this while recognizing the fundamental dual realities that (1) universities are already too stretched to devote significant resources to peer reviewing and (2) publishers are companies whose right to thrive financially should be respected? One solution is to cut the Gordian knot of review and dissemination.

I propose that we do this by delegating the responsibilities for peer reviewing to the professional, or "learned," societies. In this model, authors would submit their work to one or more of the professional societies most appropriate for that work. The societies would oversee the peer review and give accepted works an imprimatur. Authors could then shop their works with imprimaturs to different publishers, which would be in the business of dissemination rather than evaluation.

Publishers might then set their own standards for publication (they might, for example, require an imprimatur from a particular society, or from two societies) and edit for style. A publication, though, would be only as good as the content it could attract, and in order to attract good content, publishers would have to offer authors better incentives.

Such a system would have many potential benefits. Professional societies are, by their very nature, in a better position to run a peer-review process than are publishers. They have fewer potential conflicts of interest, and they have better command of and access to the experts in the field. Our current system’s review process is often hobbled by a small pool of academics willing to review; many refuse because they believe that the system is exploitative.

If, on the other hand, professional societies were to take responsibility for reviewing, more academics would view that task as a genuine service to their colleagues. The result would probably be a larger and happier pool of reviewers more willing to give the work of their colleagues serious consideration.

Many, perhaps most, works of research could conceivably be submitted to more than one professional society for evaluation. The evaluation itself could take many different forms: It might judge an article publishable or not, for example, or give it a score on a five-point scale.

What is important is that the professional societies decide on a common form and then operate anonymously but transparently. How many articles, for example, were submitted this year, and how many were judged publishable? Are there different rates based on the gender or race of the authors?

Such statistics, which are hard to come by in our current model, would help us to better identify and root out systemic biases. They would also allow the universities to do a better job of understanding the results of a peer review and thus assessing their faculty members.

Rather than serving as gatekeepers, publishers would be in the position of wooing authors. Publishers might vie for an article, for example, that has multiple high-level imprimaturs. Authors could then choose based on a variety of incentives that might be offered. One publisher might offer cash but retain all future rights to the work; another might offer less cash but allow the author to maintain rights; yet a third might offer no cash, but because the price of the journal is low, the work would be more broadly accessible.

The producers of the content would have far more power than is the case today. In fact, at that point authors might decide to simply self-publish on a blog or the like with notice of the imprimatur (a digital badge?), secure that for purposes of institutional review the work has been vetted.

Such a system, of course, would have a cost. Each professional society would have to create a robust referee system. This might be supported by fees that authors pay for each work they submit for review (perhaps subsidized by each author’s university) or, preferably, by increased dues across the board, with membership in the organization a necessary precondition for using the review system. Each of those ideas has its benefits and weaknesses, but either could work.

As a whole, this system would cost less than we now pay for the review and dissemination of research; increase its accessibility; help to make university evaluations more transparent; and allow academics to maintain more control over their research. It is a system that maintains the highest standard of scholarly scrutiny while delivering on some of the promises of digital publication. Whether that happens, though, is largely in our own hands.

Michael Satlow is a professor of religious studies and Judaic studies at Brown University. His latest book is How the Bible Became Holy (Yale University Press, 2014).