My friend “Jana” sent her most promising manuscript to a journal that we’ll call The Ivy League Business Review. She received an immediate acknowledgment of the submission, although the e-mail did not indicate whether or when the manuscript would be sent out for peer review.
So she waited. And waited some more. After six months of waiting, Jana politely asked about the manuscript’s status. She didn’t hear back. Two months later, she e-mailed again but with a more urgent tone. This time she received a reply from the editor. The essence was, “Thank you for your submission. Although the paper seems promising, it does not adequately fit the scope of The Ivy League Business Review. We therefore did not send it out for peer review. Best wishes.”
Jana will be coming up for tenure in two years, and having her paper pointlessly stalled for eight months was a real setback. When she told me about this, it certainly made an impression. I thought, “Boy, I’m glad I know this because I’m certainly never going to send a paper to that journal.” But in point of fact, I was highly unlikely to ever send anything to that journal anyway, because my research doesn’t have anything to do with business. In fact, almost none of our colleagues (we’re both in psychology departments) would be likely to consider submitting anything there. So this potentially invaluable nugget of information would normally go no further.
How many of you can relate similar horror stories of patently unprofessional, or at least wildly inconsiderate, editing or reviewing? I’d guess most of us can, and many of the stories are probably worse. And I’d guess that we often have Jana’s problem of not being able to do much with our tale besides recounting it to a sympathetic listener, assuming we can find one.
Wouldn’t it be better if we could publicly share such experiences with all those who might benefit? That would promote accountability by journals and allow authors to avoid frustrating, career-damaging situations.
I suggest the development of a crowdsourced, “author reviewed” journal-evaluation Web site. The idea is that authors from various disciplines would share their experiences with particular journals, both negative and positive. There would be quantitative information such as the time until being notified that a manuscript was sent out for review, the time until receiving the first reviews, the total time from initial submission until final publication, and, of course, acceptance or rejection. And there would also be opportunities for rating or commenting on key issues, like the fairness and constructiveness of editors and reviewers and the efficiency of the journal’s production staff.
As reviews accumulated, it would be possible to make better decisions about where we would and would not submit our work. Authors would be able, if they chose, to eliminate journals with exceptionally high or low acceptance rates. They could forgo journals with slow turnarounds or predominantly negative editor or reviewer ratings. Ideally, this Web site would allow journal searches by many other criteria too, including subject area(s), impact factor, publication fees, open-access options, database indexing, publisher, review process (e.g., blind or not), etc. This platform would also allow our colleagues, including librarians and administrators, to better evaluate where we are publishing and what journals we most need access to.
Is such a site needed? After all, many journals advertise their impact factors, some even disclose their acceptance rates or speed of publication, and many of us have a decent feel already for the key journals in our field.
But I believe there is a substantial need because our information is incomplete and often vague: “Does that journal consider nonexperimental studies? I think I heard someone at a conference say that they no longer do.” We also don’t know about the information’s veracity: “Does Journal X really have an acceptance rate of 15 percent? How did they calculate that?”
Think of it this way. If you are trying to decide where to go for the best tacos in town, you have Urbanspoon and TripAdvisor to provide hundreds of ratings, many with rich descriptions. But if you want to find the best journal for your manuscript, you may have virtually no information. And, although I like tacos as much as anyone, I hope we agree the journal decision is far more important.
When I share this idea, one question people often ask is, “Won’t this site only attract reviews from those who have had miserable experiences?” I don’t think so, especially if reviewers have some kind of stable identity, even an anonymous one. Just as we discount claims about the “worst tacos in the world” if we see that BeanoBoy has slammed every restaurant in town, we will discount journal reviews from consistently hostile reviewers. Also, mechanisms might be added to allow journals to dispute hostile reviews.
Another question I often hear is, “Won’t the journals and editors try to prevent this? It could make them look terrible.” Again, I don’t see a major problem, because most journals perform well and would look forward to seeing how they stack up and where they most need to improve. Of course, journals with big reputations that treat authors poorly … yes, they will suffer. And they should.
Will authors contribute this kind of information? After waiting eight months, I know Jana would gladly spend five minutes broadcasting her story. And I suspect that, although most of our experiences with journals are not as notable, they still affect us enough that we would want to share them. Often, we would simply provide several ratings and a brief comment, such as “Extremely constructive feedback from editor and reviewer #2!”
The final question is one I can’t answer yet: What organization or people would provide the resources to make this Web site a reality? I’ve been struggling for months to find someone. No luck so far. Is anyone interested?
Robert Deaner is an associate professor of psychology at Grand Valley State University. Most of his current projects involve using evolutionary theory to explore sex differences in human behavior.