Crowdsourced Review Probably Can’t Replace the Journals

posted on Feb 12th, 2024

This post is the first in a series about reforming academic publishing:

  1. Crowdsourced Review Probably Can’t Replace the Journals
  2. Why Isn’t Preprint Review Being Adopted?
  3. How Can We Help Journal Editorial Teams Escape the Commercial Publishers?

Two years ago, I started a journey into academic publishing. I imagined using a reputation system to replace the journals with crowdsourcing. The reputation system would match reviewers to papers, incentivize good faith review, and identify bad actors. It wasn’t clear whether it would work in practice, but I wanted to find out.
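
To make that concrete, here’s a minimal sketch of the matching half of such a system. To be clear, this is not peer-review.io’s implementation; the types and the scoring rule are hypothetical, invented purely to illustrate the shape of the idea: rank candidate reviewers by how much their fields overlap a paper’s, weighted by reputation earned from past reviews.

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    fields: set[str]         # areas the reviewer works in
    reputation: float = 1.0  # earned from reviews the community rated well

@dataclass
class Paper:
    title: str
    fields: set[str]

def match_reviewers(paper: Paper, pool: list[Reviewer], k: int = 3) -> list[Reviewer]:
    """Rank candidates by topical overlap with the paper, weighted by reputation."""
    def score(r: Reviewer) -> float:
        return len(r.fields & paper.fields) * r.reputation
    ranked = sorted((r for r in pool if score(r) > 0), key=score, reverse=True)
    return ranked[:k]
```

The matching is the easy part. Incentivizing good-faith review and identifying bad actors comes down to how that reputation number gets updated, and that, as the rest of this post explains, is where things fall apart.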

I spent a year doing user research, building a beta (peer-review.io), and working to get people to give it a shot.

I am now convinced that it’s not going to work.

While some of the reasons are specific to the reputation-based approach, most impact any attempt to crowdsource peer review. There’s a brick wall standing in the way of any attempt to move academic publishing outside of the journals.


What is crowdsourcing in an academic review context?

Before we dive into the reasons why crowdsourcing probably won’t work, let’s get some definitions in place.

In traditional academic publishing, the journal editors act as the organizers, moderators, and facilitators.

Crowdsourcing is any system of academic review where technology provides everything scholars need to self-organize: reviewers self-select and coordinate their own work without a facilitator, organizer, or editor.

What is it that we’re trying to replace with crowdsourcing?

Journals’ editorial teams are doing quite a bit of manual labor, some of which is very difficult to replace with technology.

The work of journal editorial teams includes:

  • Filtering spam and amateur work.
  • Facilitating multiple rounds of review and feedback, which requires:
    • Identifying and recruiting reviewers for papers.
    • Moderating the reviews.
  • Technical support for authors and reviewers in using the journal’s software.

Why can’t we replace that?

On the surface, most of that seems like work that a crowdsourcing system could potentially handle. I certainly thought the reputation system could handle a lot of it.

But I didn’t fully understand what two of those items actually involved.

Identifying and recruiting reviewers for papers… and convincing them to do the review.

This is a huge piece of the labor of editorial teams.

Scholars are constantly operating at the edge of burnout, with work coming at them from all directions. They don’t actually have the time or bandwidth to do review and are barely squeezing it in. Because of this, they aren’t seeking out opportunities to review. Editors have to work hard to find reviewers who can and will make time for a review, often leaning on personal relationships or the prestige of their journals.

Editors I’ve spoken to sometimes have to ask 20 people before they find 2 or 3 willing to do a review. And even then, it’s not uncommon for people to commit and drop out.

I definitely didn’t understand just how bad this was when I started out. I was hoping a crowdsourcing system could fix it by building review into the natural process of reading the literature. It’s still unclear whether that would help, even if a system that made review easy and natural became the default publishing system. But it presents a chicken-and-egg problem: reviewers aren’t going to review on that system until it’s the default, and it can’t become the default without the reviewers.

Moderating the reviews.

Journal editors are doing an enormous amount of moderation. And not just your standard internet discussion moderation: they’re doing a very specific kind of ego management.

Crowdsourced systems generally work as long as the average actor is a good actor.

A good actor in the context of academic publishing is someone who is willing to put aside their own ego in the pursuit of the best work possible. This is someone capable of recognizing when they’re wrong and letting their own ideas go. A good actor would see a paper that invalidated their work and be able to assess it purely on the merits.

It is unclear whether the average academic is a good actor in this sense. And it’s editors who keep that from tearing the whole system apart.

Most academics seem to intuitively suspect that there are enough bad actors in academia to make crowdsourcing non-viable. One of the biggest pieces of feedback I got was concern about abuse, manipulation, gaming, or just plain old bad-faith reviews from people with competing ideas.

If the average actor is a good actor, a well-designed reputation system would be able to flush those behaviors out over time. If not, the bad actors break the system. The same failure breaks just about any other attempt to crowdsource review as well.
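
You can see the dynamic in a toy simulation. To be clear, this is not peer-review.io’s model; the rating scheme and every number below are assumptions invented to illustrate the tipping point. Each reviewer’s reputation is the reputation-weighted average of ratings from a random sample of the community. Good actors rate a review by its actual quality; bad actors rate by tribal allegiance.

```python
import random
from statistics import mean

def simulate(n=200, good_fraction=0.7, rounds=50, sample_size=20, seed=0):
    rng = random.Random(seed)
    # Hidden ground truth: who is acting in good faith?
    is_good = [rng.random() < good_fraction for _ in range(n)]
    # Assume good actors write better reviews than bad actors.
    quality = [rng.uniform(0.6, 1.0) if g else rng.uniform(0.0, 0.4)
               for g in is_good]
    reputation = [0.5] * n  # everyone starts neutral

    for _ in range(rounds):
        new_rep = []
        for target in range(n):
            raters = [r for r in rng.sample(range(n), sample_size) if r != target]
            num = den = 0.0
            for r in raters:
                if is_good[r]:
                    rating = quality[target]  # honest: rate the actual work
                else:
                    # Tribal: boost fellow bad actors, bury everyone else.
                    rating = 1.0 if not is_good[target] else 0.0
                num += reputation[r] * rating  # weight ratings by rater standing
                den += reputation[r]
            new_rep.append(num / den if den > 0 else 0.5)
        reputation = new_rep

    good_rep = mean(rep for rep, g in zip(reputation, is_good) if g)
    bad_rep = mean(rep for rep, g in zip(reputation, is_good) if not g)
    return good_rep, bad_rep

for frac in (0.8, 0.5, 0.3):
    g, b = simulate(good_fraction=frac)
    print(f"{frac:.0%} good actors -> good rep {g:.2f}, bad rep {b:.2f}")
```

With a clear majority of good actors, the weighting does its job: bad actors’ standing collapses and their ratings stop counting. But in this toy, even a 50/50 split tips the wrong way, because the tribal raters inflate each other’s standing and their ratings then carry more weight. That’s the worry in a nutshell.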

Any attempt to replace the journals has to contend with these two issues and has to have a really solid, convincing answer for them. Peer-review.io doesn’t.

The Other Shoe

There’s another reason why outright replacing the journals with a crowdsourced system, or any other system, is unlikely to succeed.

Lock-in.

Authors simply aren’t free to try other systems. Their career advancement is wholly dependent on publishing in a limited set of journals chosen by their departments. In some cases those are chosen by committees of their peers, in other cases by university administrators.

Authors are not going to risk publishing a worthwhile paper in a new system. And reviewers aren’t going to go anywhere authors aren’t.

I had hoped focusing on file-drawered papers might provide an in. But it’s unclear just how much of a problem file-drawered papers really are. It seems likely many of them aren’t easily shareable. Many academics question whether their file-drawered papers should be shared at all. And often, the ones they do want to share have already been shared on preprint servers.

There’s very little room for movement in the system where authors and reviewers are concerned. And you can see that in the massive graveyard of attempts to build something different, to which peer-review.io is now being added.

Is there hope for change?

Crowdsourcing probably won’t work, but there is still hope. We just have to change how we think about the system.

There are various attempts to reform the journal structure itself, like eLife’s move to reviewed preprints with a no-rejection model. There are attempts to overlay a journal-like structure on top of preprints, like Peer Community In. And in some cases, earlier attempts to crowdsource are gradually evolving back toward something that looks like a journal, as PREreview, which started out purely crowdsourced, is doing with its review communities.

It remains to be seen whether any of these attempts will get enough traction to drive systemic change in the long run, but they all have the potential to.

But there’s another path as well.

If we admit that the journal structure, defined as an editorial team manually organizing and moderating review, is unlikely to change, and if we redefine our goal from “crowdsource review” to “escape the commercial publishers” and “enable experimentation with the journal structure”, then there’s a path with a ton of promise: flipping journals.

But that’s a story for another post.