Can We Help Scholarly Journal Editorial Teams Escape the Commercial Publishers?

posted on Mar 12th, 2024

This post is the third in a series about reforming academic publishing:

  1. Crowdsourced Review Probably Can’t Replace the Journals
  2. Why isn’t Preprint Review Being Adopted?
  3. How Can We Help Journal Editorial Teams Escape the Commercial Publishers?

For two years I’ve been working to build a non-commercial scholarly commons. I first explored crowdsourcing review using a reputation system. About a year ago, I concluded that wasn’t going to work and I recently wrote up my reasoning in this post.

In late February, participants in the Recognizing Preprint Peer Review workshop posted a paper outlining a vision for preprint review. They examined how much progress we’ve made in the last 6 years, and it isn’t much. In my last post, I analyzed why.

Preprint review efforts suffer from the same impediments as crowdsourced review. Scholars simply do not have the time and they’re locked into the journals.

If we want scholars to adopt preprint review, or any other publishing experiment, we have to make it as frictionless as possible. We need to build it directly into scholars’ journal publishing workflow.

But the commercial publishers aren’t interested in reform. So can we bring the journals, and their authors and reviewers, to us?

An image of many hallways with exit signs.

Many Journals are Scholar Run

The journals are not the commercial publishers. The journals are their editorial teams and communities of scholars.

To get a sense of how many are entirely scholar run, I ran an unscientific survey. I looked at every Taylor and Francis journal in the Bioscience category (~300 journals) and assessed their editorial team. [1]

I found that ~53% of the journals I examined had no professional editor of any kind. Where journals did have one, the role was often a grad student’s or research software engineer’s second job.

I spot checked Wiley, Sage, Elsevier, and Springer Nature using the same methodology. Sage, Elsevier, and Springer Nature appeared to show similar patterns, with most journals being scholar run. Wiley was the exception, consistently associating professional editors with its journals.

This suggests that there are a huge number of scholar run journals.

There are already a number of open source journal platforms. Why aren’t more editorial teams leaving the publishers and running their journals on these platforms?

It’s All About User Experience

User experience (UX) refers to the experience a user has while using a product or set of services. In a non-software context, it is shaped by interpersonal interactions, business processes, or the qualities of a physical good. In a software or digital infrastructure context, the largest contributor to the user experience is the software’s interface.

In interface design, we have the concept of “friction”. Friction is anything that slows a user down: clicking, typing, thinking, or waiting. The best software user experiences are created by minimizing friction.

When building new software, you have to be aware of the market you’re building into. If there is no existing solution, then anything reasonably effective is likely to be adopted. But if there is an existing solution, then you can’t just match the existing experience. You have to substantially improve on it.

The open source publishing platforms have been adopted by journal teams that have nowhere else to go: editorial teams that can’t or won’t work with the publishers. But they haven’t been successful at flipping journals. The user experience isn’t enough of an improvement on what the commercial publishers are offering.

The software is only part of the equation. Most editorial teams are given money to run their journals, which many use to pay themselves a stipend. The publishers provide copy editing, production, and marketing in addition to the software platforms.

We know user experience can make the difference. Negative user experience is the primary driver of the trickle of editorial defections. Can we build a user experience good enough to attract them?

How do we turn the trickle into a torrent?

Robert Maxwell established modern corporate publishing using a relentless focus on the user experience of editors. We need to invert what Maxwell did.

I’ve spent the last year conducting user research with editorial teams to learn how to flip them. As a starting point, I’ve identified a number of needs, most currently unmet by the open source platforms.

Save editors time.

Time is scholars’ most precious resource.

Neither Scholar One nor Editorial Manager is intuitive or low friction. Editors spend a lot of time providing technical support to their authors and reviewers, sometimes going so far as to manually enter submissions because the authors or reviewers couldn’t figure out how to do it.

The open source systems are only marginal improvements. People working at library publishing programs shared similar stories of struggle.

Building a low friction, intuitive interface will save everyone involved time.

Handling editors often spend a significant chunk of their time googling for reviewers. Accurate reviewer recommendations would save them that time.
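
Speculatively, this doesn’t require exotic technology. Here’s a minimal sketch of one possible approach, ranking candidates by the textual similarity between a submission’s abstract and each reviewer’s published abstracts. The function and data shapes are hypothetical, and a real system would layer citation data, availability, and conflict-of-interest checks on top:

```python
# Hypothetical sketch: rank candidate reviewers by textual similarity
# between a submission's abstract and each reviewer's published abstracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend_reviewers(submission_abstract, reviewer_abstracts, top_n=5):
    """reviewer_abstracts maps reviewer name -> concatenated abstract text."""
    names = list(reviewer_abstracts.keys())
    corpus = [submission_abstract] + [reviewer_abstracts[n] for n in names]

    # Vectorize the submission and all reviewer profiles together.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(corpus)

    # Compare the submission (row 0) against every reviewer profile.
    scores = cosine_similarity(vectors[0], vectors[1:]).flatten()

    ranked = sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_n]
```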

Many editors do a lot of their work outside of their systems. They communicate over email or Slack. They track their work in spreadsheets.

We know how to build powerful and flexible workflow and communication tooling. Having that tooling directly in their publishing platform would save editors additional time and cognitive load.

If we can save editors enough time, it could make the difference in their ability to adopt Diamond business models. I spoke with one editor who told me flatly “I wouldn’t do this job for free.” I asked her how long it takes her per week. “It takes 5 to 10 hours a week.” Is there a point at which she would do it on a volunteer basis? “Two to three hours a week.”

Focus on the community.

Journals are communities. They began as 17th century social networks and have evolved to provide other services. The adoption of ResearchGate, Academia.edu, and Academic Twitter shows there’s a desire for a modern academic social network. Building publishing on top of a social network (think GitHub, not Facebook) would allow scholars to more easily find each other, communicate, and collaborate.

As it stands, authors have to re-enter their information into system after system. Editors have to build their own databases of potential reviewers, often from inaccurate data. Building publishing on top of a social network means authors and reviewers can enter their information once and editors will have access to it. This allows us to drastically simplify submission processes and help editors find reviewers more easily.
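
To make the “enter it once” idea concrete, here is a rough sketch of the data model it implies (all names here are hypothetical): every journal on the platform references a single scholar-owned profile instead of keeping its own copy.

```python
# Hypothetical sketch of a shared scholar profile. A journal's submission
# form references the profile; it never asks the author to re-key it.
from dataclasses import dataclass, field

@dataclass
class ScholarProfile:
    name: str
    orcid: str                    # a stable identifier across systems
    institution: str
    fields_of_study: list[str] = field(default_factory=list)
    publications: list[str] = field(default_factory=list)  # DOIs

# Editors searching for reviewers query these same profiles, instead of
# maintaining their own spreadsheets built from inaccurate data.
```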

Many editors expressed a desire to do more to build community around their journals. We could provide a myriad of community building tools: discussions, Q&A, chat, documentation, and more.

Provide an Open Impact Factor.

For better or worse (worse), Impact Factor is the standard metric by which journals are assessed. Leaving the commercial publishers usually means leaving the title, and its associated impact factor, behind.

If we want to give ourselves the best chance to de-commercialize publishing, we need to provide something that can make that easier to stomach.

We know how impact factor is calculated, meaning we can calculate an “Open Impact Factor” based on open sources. When whole editorial teams defect, we can count the old journal’s impact factor in the new journal’s OIF to give some continuity.
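
For reference, the classic two-year impact factor is the number of citations received in a given year by the items a journal published in the previous two years, divided by the number of citable items it published in those two years. Here’s a minimal sketch of an OIF computed from open citation data. The data structures are hypothetical; a real implementation might draw on open sources like Crossref or OpenCitations:

```python
# Hypothetical sketch of a two-year "Open Impact Factor" calculation.
def open_impact_factor(papers, citations, year):
    """papers maps paper_id -> publication year for one journal.
    citations maps paper_id -> list of years in which it was cited."""
    window = {year - 1, year - 2}

    # Items the journal published in the two-year window.
    citable = [pid for pid, pub_year in papers.items() if pub_year in window]

    # Citations those items received during the target year.
    cites = sum(
        1
        for pid in citable
        for citing_year in citations.get(pid, [])
        if citing_year == year
    )

    return cites / len(citable) if citable else 0.0
```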

Counting the old impact factor won’t help us for the first flips, but if the scholarly community adopts the OIF, that would go a long way to enabling escape.

Replace publisher services.

Publishers still provide copy editing, production, and marketing to all their journals. I did speak to some editorial teams that would be willing to forgo those services if it meant escaping the publishers for a better experience running their review processes, but other teams needed them. If we want to achieve universal non-commercial publishing, we need a replacement for them.

I don’t think it’s a good idea for the organization running the platform to directly provide those services. The goal is to build a commons to enable scholars to run their own journals, not a new publisher.

For copy editing and production, a better approach would be to automate them, provide intuitive tooling, or allow others to provide them through open marketplaces.

Marketing in this context is matching papers with interested readers. We can make marketing unnecessary by building powerful discovery tooling, ensuring that readers can always find the papers they’re interested in.

Democratic, not Decentralized

There is a push towards decentralized or federated systems, with good reason. But if we want to be successful, then we need to build this platform as a centralized system.

Mastodon still counts its total users in the low ten millions, and its average daily users in the low single digit millions.

We haven’t figured out how to build decentralized systems with low enough friction for the average user. Picking an instance is too much for most users, and discovery remains an unsolved problem. We need to build a system that is easy and intuitive for any user; we can’t afford an architecture that introduces a bunch of friction at the outset.

The push for decentralization is really about the capital ownership of centralized systems. Capital ownership walls them off and inevitably enshittifies them.

But non-capital-driven centralized systems don’t have to be walled gardens. We can open the source code for transparency (and contribution), open the data so that it can be reused, archived, and backed up, and open the API so that people can build on top of the system. We can enable trusted partner institutions to run read-only mirrors.

Instead of decentralization, we need democracy.

We need non-profits that are governed by their users through directly democratic processes. We can write by-laws that require major decisions to be ratified by a vote of the user base. Once there’s a reasonably sized user base, this would protect the system from takeover by corporate interests.

Mirroring, open data, and open source provide an additional fail-safe against takeover, allowing the platform to be forked and carried forward in a new organization as a last resort.

Escape and Evolution

We can use flipping journals to bring the scholarly community to the platform, and then we can use the platform to make it low friction to participate in publishing experiments. In this way, we can help publishing evolve.

There’s a lot more to cover. Just building a platform isn’t enough. We need to go to the journal editorial teams, convince them to leave the commercial publishers, and help them make the transition.

Some journal teams may be able to run at a low enough cost to go Diamond. But others will need some way to fund their operations. Plus, we need to support the team building and maintaining the platform. We need to think about funding schemes, both for the platform and for those using it to organize publishing.

There’s still a lot more work to do. But I haven’t just been doing user research for the last year, I’ve been building an alpha.

That, however, is a story for another post.


  1. If any of the editors had the titles “Administrative Editor”, “Managing Editor”, or “Acquisitions Editor”, then I marked that as a professional editor. If I could identify any of the editors on LinkedIn and they listed T&F as an employer, I marked that as a professional editor. If the journal had a society affiliation, then I assumed professional editorial help through the society and marked it a “maybe”. I examined over 300 journals this way.

Why isn't Preprint Review Being Adopted?

posted on Mar 6th, 2024

This post is the second in a series about reforming academic publishing:

  1. Crowdsourced Review Probably Can’t Replace the Journals
  2. Why isn’t Preprint Review Being Adopted?
  3. How Can We Help Journal Editorial Teams Escape the Commercial Publishers?

Participants in the Recognizing Preprint Peer Review workshop posted a paper in which they outlined a vision for increasing peer review of public preprints. They also examined what progress has been made towards realizing that vision.

The paper shared data on the uptake in preprint review across 13 services. Some of the services listed use crowdsourced review, but most take a journal-like form with review organized and moderated by a team of editors.

You can find the raw numbers linked in one of the paper’s citations:

Year    Articles Reviewed    Growth
2017                   23
2018                   53      130%
2019                  317      498%
2020                  875      176%
2021                 1698       94%
2022                 2704       59%
2023                 3144       16%

After 6 years, only a few thousand articles are being reviewed per year across 13 different services. And it looks like growth is stalling.

Compare that to Arxiv’s adoption:

Arxiv's adoption.

After 6 years there were already 20,000 articles a year being shared through Arxiv - a single preprint service.

So why is the adoption of preprint review so much slower than the adoption of preprints themselves? And why does it appear to be stalling?

What’s blocking adoption?

All of these services suffer from two of the three impediments that prevent the adoption of crowdsourced review:

Scholars have no bandwidth for review, and scholars are not free to leave the journals. The services that crowdsource also suffer from the third: fears of bad actors in an unmoderated context.

Scholars have no bandwidth for review.

They find time to review for traditional journals because it counts towards their service work, and because journal editors lean on personal relationships or the prestige of their journals, with an implied exchange of benefit. Journals are also communities, and people will go above and beyond for their communities. Even with all of that, editors are really struggling to recruit reviewers.

Scholars are not free to leave the journals.

Scholars are required by their tenure and promotion committees to publish in a limited set of journals. They aren’t really free to go outside of their journals.

…so how did preprints get traction?

Preprints were able to get traction because they didn’t compete with journal publishing. There was a direct benefit to scholars from sharing their work through a preprinting service: they got it in the hands of their peers and community that much sooner. The time it took was low compared to the benefit. And they could still go on to publish it in a journal.

Preprint review does compete with journal review. It competes for scholars’ most precious resource: their time.

It doesn’t offer the same benefits as journal review. It’s not clear preprint review counts towards service work and the preprint review services don’t have prestige to lean on. They can’t offer an implied exchange of benefits. And they haven’t built significant communities yet (though many are working very hard to do just that).

Preprint review services have to ask academics to sacrifice their most precious resource in return for the promise of a better future.

We can’t afford to introduce additional barriers to adoption. But many of the preprint review services have done just that through their design choices.

Making it Harder than it Already Is

In user interface design, we have the concept of friction. Friction is anything that slows the user down in accomplishing their task: clicks, typing, thinking, or waiting. Some friction is unavoidable, but good system design always seeks to drive friction down to the absolute minimum possible.

Many of these services have introduced friction into the process of reviewing or seeking review for a preprint through their design choices. They chose to build new platforms to run preprint review and layer them on top of the existing preprint platforms, meaning adopters now have to contend with two entirely separate platforms.

Some of the platforms ask reviewers to select a preprint on a preprint platform, bring it to the review platform, and review it. Others ask authors to submit to an existing preprint platform before submitting to the review platform to have their paper reviewed.

To adopt preprint review through one of these platforms, users must:

  1. Become aware of the existence of a platform.
  2. Visit the platform.
  3. Understand what the platform is and what it is asking of them.
  4. Learn how to use the platform to give a review or submit a preprint for review.
  5. Find the preprint they want to review, or submit the preprint they want reviewed, on another service. Meaning they have to:
    1. Discover the preprint platform(s) for their discipline. [1]
    2. Learn how to use the preprint platform.
    3. Find the preprint they want to review. Or submit the preprint they want reviewed and wait for it to be accepted.
  6. Learn how to submit that preprint to the review platform.
  7. If they’re the reviewer, execute the review. If they’re an author, they have to work through a journal-like review process.

At any point, we could lose a user. Each step in the journey adds friction and increases that risk substantially.

Some of these steps are unavoidable when you’re trying to build a new platform (e.g. steps 1 through 4). Others we have direct control over. Step 5 adds a ton of friction, not only to the initial adoption but to each subsequent review and submission. COAR Notify helps with step 1, assuming the user is already on a preprint platform that has adopted it, but it doesn’t help with the rest of the process.

By adopting these designs, we’ve made an already uphill climb that much steeper.

What can we do differently?

We need to make adopting preprint review as frictionless as possible. We need people to be able to do it with one click, and for the option to do it to be right in front of them where they already are. We need to integrate preprint review directly into scholars’ existing workflows. [2]

That means it needs to be a core part of the existing preprint platforms. Even better would be to integrate preprinting and preprint review directly into the traditional publishing flows and platforms: e.g. Scholar One or Editorial Manager.

I’m assuming the folks working on preprint review tried to integrate directly into the *rxiv diaspora, and weren’t able to. It wouldn’t be surprising if the preprint services were wary of implementing review for fear the journals would view it as direct competition and start refusing to accept preprinted submissions.

It also seems highly unlikely that the commercial publishers are going to integrate preprinting and preprint review into Editorial Manager or Scholar One.

So are we stuck with high friction approaches? No. There’s a third option.

We can build a new platform, with preprinting and preprint review integrated directly into the publishing flow, and bring the journals to that platform.

The commercial publishers and the journals are not the same thing. The journals are their editorial teams and communities. Many (most?) of the editorial teams are composed of scholars. We’ve seen communities follow editorial teams when the editorial teams leave the commercial publishers.

If we could convince the editorial teams to run their journals on our new platform, they would bring their authors and reviewers with them. That would allow us to build preprinting and preprint review directly into scholars’ primary publishing workflow. We could make it one click and achieve the absolute minimum friction possible, which would give us the best chance of adoption.

So is that possible? Yes. But that’s a story for another post.


  1. Many of these services chose this architecture to take advantage of the buy-in that existing preprint servers have. They were hoping that this would eliminate steps 5.1 and 5.2. For those aware of the preprint platforms, that works. But step 5.3 still adds substantial, repeated friction to the workflow. And preprinting itself is still working towards adoption. By some estimates 5 million scholarly articles are published a year, but only 10-20 million preprints in total have been shared in the last 30 years (Wikipedia).
  2. There’s one service that doesn’t fit well into this analysis: eLife. By taking a traditional journal and turning it into a preprint review service, they are very nearly building preprint review into scholars’ existing flows… …nearly. They aren’t asking users to use two separate platforms, but they are asking users to choose between a reviewed preprint and a traditional version of record (VOR) at the outset. They got some serious backlash from their community for doing it. It’s still early in the experiment, so it remains to be seen how it will play out. But of all the current attempts, this one removes the most barriers to adoption.

Crowdsourced Review Probably Can't Replace the Journals

posted on Feb 12th, 2024

This post is the first in a series about reforming academic publishing:

  1. Crowdsourced Review Probably Can’t Replace the Journals
  2. Why isn’t Preprint Review Being Adopted?
  3. How Can We Help Journal Editorial Teams Escape the Commercial Publishers?

Two years ago, I started a journey into academic publishing. I imagined using a reputation system to replace the journals with crowdsourcing. The reputation system would match reviewers to papers, incentivize good faith review, and identify bad actors. It wasn’t clear whether it would work in practice, but I wanted to find out.

I spent a year doing user research, building a beta (peer-review.io), and working to get people to give it a shot.

I am now convinced that it’s not going to work.

While some of the reasons are specific to the reputation based approach, most impact any attempt to crowdsource peer review. There’s a brick wall standing in the way of any attempt to move academic publishing outside of the journals.

A photo of stacked journals.

What is crowdsourcing in an academic review context?

Before we dive into the reasons why crowdsourcing probably won’t work, let’s get some definitions in place.

In traditional academic publishing, the journal editors act as the organizers, moderators, and facilitators.

Crowdsourcing is any system of academic review where the reviewers self-select and self-organize, with technology providing everything they need to do so without a facilitator, organizer, or editor.

What is it that we’re trying to replace with crowdsourcing?

Journals’ editorial teams are doing quite a bit of manual labor, some of which is very difficult to replace with technology.

The work of journal editorial teams includes:

  • Filtering spam and amateur work.
  • Facilitating multiple rounds of review and feedback, which requires:
    • Identifying and recruiting reviewers for papers.
    • Moderating the reviews.
  • Technical support for authors and reviewers in using the journal’s software.

Why can’t we replace that?

On the surface, most of that seems like work that a crowdsourcing system could potentially handle. I certainly thought the reputation system could handle a lot of it.

But I didn’t fully understand exactly what was happening with two items in particular.

Identifying and recruiting reviewers for papers… and convincing them to do the review.

This is a huge piece of the labor of editorial teams.

Scholars are constantly operating at the edge of burnout, with work coming at them from all directions. They don’t actually have the time or bandwidth to do review and are barely squeezing it in. Because of this, they aren’t seeking out opportunities to review. Editors have to work hard to find reviewers who can and will make the bandwidth to do a review for them, often leaning on personal relationships or the prestige of their journals.

Editors I’ve spoken to sometimes have to ask 20 people before they find 2 or 3 willing to do a review. And even then, it’s not uncommon for people to commit and drop out.

I definitely didn’t understand just how bad this was when I started out. I was hoping a crowdsourcing system could fix it by building review into the natural process of reading the literature. And it’s still unclear whether that would help, even if a system that made review easy and natural became the default publishing system. But it presents a chicken and egg problem: reviewers aren’t going to review on that system until it’s the default, and it can’t become the default without the reviewers.

Moderating the reviews.

Journal editors are doing an enormous amount of moderation. And not just your standard internet discussion moderation. They’re doing a lot of a very specific kind of ego management.

Crowdsourced systems generally work as long as the average actor is a good actor.

A good actor in the context of academic publishing is someone who is willing to put aside their own ego in the pursuit of the best work possible. This is someone capable of recognizing when they’re wrong and letting their own ideas go. A good actor would see a paper that invalidated their work and be able to assess it purely on the merits.

It is unclear whether the average academic is a good actor in this sense. And it’s editors who keep that from tearing the whole system apart.

Most academics seem to intuitively suspect that there are enough bad actors in academia to make crowdsourcing non-viable. One of the biggest pieces of feedback I got was concerns about abuse, manipulation, gaming, or just plain old bad faith reviews from people with competing ideas.

If the average actor is a good actor, a well designed reputation system would be able to flush those behaviors out over time. If not, it breaks the system. This breaks just about any other attempt to crowdsource review as well.

Any attempt to replace the journals has to contend with these two issues and has to have a really solid, convincing answer for them. Peer-review.io doesn’t.

The Other Shoe

There’s another reason why outright replacing the journals with a crowdsourced system - or any other system - is unlikely to succeed.

Lock in.

Authors simply aren’t free to try other systems. Their career advancement is wholly dependent on publishing in a limited set of journals chosen by their departments. In some cases those are chosen by committees of their peers, in other cases by university administrators.

Authors are not going to risk publishing a worthwhile paper in a new system. And reviewers aren’t going to go anywhere authors aren’t.

I had hoped focusing on file drawered papers might provide an in. But it’s unclear just how much of an issue file drawered papers are. It seems likely many of the papers aren’t easily shareable. Many academics question whether their file drawered papers should be shared. Often, ones that they want to share have already been shared on preprint servers.

There’s very little room for movement in the system where authors and reviewers are concerned. And you can see that in the massive graveyard of attempts to build something different, to which peer-review.io is now being added.

Is there hope for change?

Crowdsourcing probably won’t work, but there is still hope. We just have to change how we think about the system.

There are various attempts to reform the journal structure itself, like eLife’s move to reviewed preprints with a no-rejection model. There are attempts to overlay a journal-like structure on top of preprints, like Peer Community In. And some previous attempts to crowdsource are gradually evolving back towards a journal form, like PREreview, which started out purely crowdsourced and is moving back towards something journal-ish with its review communities.

It remains to be seen whether any of these attempts will be able to get enough traction to make systematic change in the long run, but they all have the potential to.

But there’s another path as well.

If we admit that the journal structure, defined as an editorial team manually organizing and moderating review, is unlikely to change, and redefine our goal from “crowdsource review” to “escape the commercial publishers” and “enable experimentation with the journal structure”, then there’s a path with a ton of promise: flipping journals.

But that’s a story for another post.

City Elections 2023 - Housing and Land Use Policy Guide (Affordability)

posted on Mar 12th, 2023

It’s city election time in Bloomington, Indiana. The candidates have all filed and the race is underway. This is the first in a series of posts covering city policy.

There are four major policy areas that tend to come up in municipal policy discussions. They are:

  • housing and land use
  • transportation
  • public safety and policing
  • economic development

There are other things that the city has jurisdiction over that don’t fit neatly into those categories (parks, utilities, etc.). There are also very important cross-cutting concerns: things like climate response and antiracism touch on most of those policy areas, and solutions to problems like homelessness often touch on several.

But those categories tend to be the most significant areas of city policy.

I’m going to start with housing and land use. There are two major questions we need to address: “How do we make housing affordable and available to everyone?” and “What kind of built environment should we live in?” I’ll address the first in this post and (hopefully) deal with the second in a future post.

Aerial shot of house rooftops - Photo by Breno Assis on Unsplash

How do we make housing affordable and available to everyone?

We have a pretty good idea how to improve affordability. It’s just not easy to do. In the case of Bloomington, a lot of the strategies that have been effective elsewhere have been banned by the state government in Indianapolis.

We Have to Build More

First off, we need to build a lot more housing. There are 30,000 people who live in the surrounding counties and commute into the city every day. While some of those people wouldn’t choose to live in the city, a fair number of them were simply priced out.

The city released a housing study in 2020. The study referenced two vacancy rates: the American Community Survey, run by the Census Bureau, reported a 9% vacancy rate for Bloomington, while the city’s own survey found a vacancy rate of less than 2%. The ACS doesn’t account for the seasonal variation in vacancy that comes with being a college town, but the city’s study relied on landlords self-reporting their vacancy rates. Make of that what you will. My guess is the true number is somewhere in between.

Even if the vacancy rate is 9%, that’s not enough vacant housing to house even a third of the commuters. That vacancy rate is definitely not high enough to push landlords to lower rents.

When developers build a new project they do the financial math with an assumed vacancy rate based on the market they’re building into. It’s often between 5% and 10%. To get rents to go down, we need the vacancy rate to be significantly higher than the one they assumed and built in.
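
A toy example, with hypothetical numbers, shows why vacancy above the assumed rate is what creates pressure to cut rents:

```python
# Hypothetical numbers: a 200-unit building underwritten at 7% vacancy
# facing an actual market vacancy of 12%.
units = 200
underwritten_vacancy = 0.07
market_vacancy = 0.12
rent = 1200  # monthly rent per unit

expected_income = units * (1 - underwritten_vacancy) * rent  # $223,200/month
actual_income = units * (1 - market_vacancy) * rent          # $211,200/month

# The landlord is ~$12,000/month short of the pro forma. Cutting rents to
# refill units starts to beat holding out for full price.
print(expected_income - actual_income)  # 12000.0
```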

So we need to build a lot of housing. This is necessary to achieve housing affordability, but not sufficient.

Building housing is slow. When you look around town, it may seem like we’re building a lot. But when you count the beds, it comes out to surprisingly little. The biggest of the megahousing projects we’ve built lately was around 1000 beds. Most “big” apartment projects are a few hundred beds. Many are less than 100. They often take several years to build.

Put it all together and we’re only adding a few thousand new housing units a year. Many years we add less than a thousand. We only started building at that rate a few years ago. So it’s going to take us years to catch up.

When the vacancy rates do finally rise, the rents will begin to fall. We’re already seeing this in other cities that are ahead of us on building.

But we can’t count on that for permanent affordability. When rents start to fall, the developers will eventually slow down or stop building. Vacancy pressure is necessary, but not sufficient.

We’re Limited by State Government

It’s important to make note of what we can’t do here. We can’t do rent control; it’s banned at the state level. We can’t do inclusionary zoning, where we would require developers to include a certain percentage of permanently affordable housing in all projects; that’s also banned at the state level.

We can do density bonusing, where we allow developers extra density (another floor) in exchange for holding some percentage of units permanently affordable. But that only helps insofar as developers are building.

So we need to put other pressure on the landlords to lower rents. There are a lot of things we could explore to do this.

Renter’s Bill of Rights

A Renter’s Bill of Rights (like this example) with things like requiring cause for eviction, fee limitations, and right of first refusal would help.

Once again, the state government is standing in our way. We have a rental inspection program that allows us to do some of this. But it’s grandfathered in. New rental inspection programs are banned at the state level. Past city governments have been very afraid to try new things with the grandfathered program for fear the state would decide they invalidated the grandfathering and kill it.

It’s unclear what would happen if we attempted to implement a Renter’s Bill of Rights independent of the rental inspection program, or whether we even could under state code.

At the very least, we can properly fund HAND, the city’s Housing and Neighborhood Development department; right now they don’t have enough resources to keep up with inspections.

Vacant Rental Fee

We could also explore a vacant rental fee: where we only allow a landlord to hold a rental vacant for a certain time period (6 months or a year) before we start charging a monthly fee. The fee has to be high enough that it actually causes pain and incentivizes the landlord to lower rents to find a renter. Some cities in Europe have been exploring this. It’s likely it would be banned at the state level as soon as we try it, but it’s still worth trying.

Finally, we need to explore and support alternative housing development and ownership structures. Public housing is the obvious one and we need to build more of it. But there are other forms as well.

Housing Cooperatives

Cooperatively owned housing allows tenants to collectively govern their own rental housing. We already have a housing cooperative in Bloomington: Bloomington Cooperative Living. The cooperative owns three properties and leases two more. It’s structured as co-living, with members renting a room and sharing living areas, but housing cooperatives don’t have to be co-living. BCL manages to get rents down to $400-$600 a month, including utilities and food.

BCL has been steadily growing and the city should invest in its continued growth. It’s also worth putting effort into forming additional cooperatives, because diversity is good. It’ll take a while, but if we can eventually reach a point where a significant chunk of the rental market is cooperatively owned, that would put real pressure on the landlords.

Community Land Trusts

Cooperatives work best on rental housing, but what about keeping owner occupied housing affordable? For that we need a Community Land Trust.

Community Land Trusts are non-profits that can be democratically run by their members, but don’t have to be. A community land trust works by owning the land that owner-occupied housing is built on and giving the homeowner a 99-year ground lease. The occupant still owns the building, can get a traditional mortgage, and accrues equity, but the land trust dictates how much the value of the property can rise. This effectively removes the housing from the normal market and gives it a set rate of appreciation.
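
A toy example, with hypothetical numbers, shows how the appreciation cap works:

```python
# Hypothetical numbers: a home entering the trust at $200,000 with a
# 1.5%/year appreciation cap, in a market appreciating at 8%/year.
initial_price = 200_000
capped_rate = 0.015
market_rate = 0.08
years = 10

capped_resale = initial_price * (1 + capped_rate) ** years  # ~$232,000
market_value = initial_price * (1 + market_rate) ** years   # ~$432,000

# The occupant still accrues equity (roughly $32,000 here), but the home
# resells for about half of what the open market would demand.
print(round(capped_resale), round(market_value))
```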

Community Land Trusts are very effective at keeping gentrification at bay, as long as local governments recognize and support them. Homeowners who are worried about being priced out of their homes - which usually happens when their property taxes rise with the value of their property beyond what they can pay - can put their homes in the land trust, which keeps the price, and thus the taxes, from climbing.

Activists in Bloomington recently formed a land trust. The city should support it financially, and work with the county to ensure the land trust is accounted for when calculating property values and taxes.

This is just examining the affordability aspect of housing and land use policy. There are many other things to consider: the sustainability of the built environment, histories of racial exclusion and how we make amends, and what we do for those who struggle to stay in housing for reasons other than the cost alone.

I’ll try to cover all of those things in future posts.

But these give you a pretty good idea of what to look for on the issue of housing affordability. Almost all of the candidates will pay lip service to housing affordability at some point. The good ones know what it takes to create it. The bad ones will talk the talk, but when it comes to following through, they’ll balk.

Peer Review Reaches Alpha

posted on Aug 5th, 2022

Peer Review now exists as an alpha.

Screenshot of Scientific Publishing Web Platform

I’m looking for a couple of things:

  • I need feedback on the concept. Is this a good idea? Am I heading in the right direction? I’ve explained it in detail in the post below.
  • I’m looking for academics who are interested in exploring the alpha and giving me feedback on all aspects of it.
  • I’m looking for people who are interested in signing up for the closed and open betas, early adopters who can form the core of the initial community.

And, if you do think I’m heading in the right direction with this, I’m asking for donations to support the development work financially and extend the runway.

I’ve made a ton of progress in the last month, but there’s still a lot of work left to do! I think I’m a few months out from being able to begin a closed beta. I’m really excited by the idea and the platform’s potential. I’m eager to hear the thoughts of everyone involved in academic publishing!

I wrote up a post on the Peer Review blog explaining what the platform is and how it works, as well as linking to a feedback and beta sign up form. Here’s the link: A Possible Fix for Scientific and Academic Publishing

Please share far and wide!