Incentives and research software

Happily, we now have open access to most new academic publications, and publicly accessible data and software are coming soon. However, I don’t think openness alone is sufficient to ensure reliable, reproducible research. I believe we need to change what counts as a creditworthy output in academia.

For many academics, the major metric of success is publishing frequency. A number of authors, including Daniel Sarewitz and Marc A. Edwards & Siddhartha Roy, have recently examined the wider implications of this for research. Here, I’m going to concentrate on research software.

Favouring publishing frequency and novelty over rigour leads to code that produces any plausible, explainable result in as short a time as possible. Sustainability, reproducibility and robustness, the foundations for building knowledge, are all too easily neglected. Often code is not fit for human consumption; instead, it is an ephemeral love letter from researcher to hardware [1]. The affair is brief and the correspondence disposable.

Unpaid reviewers, rushing to get their own next publication out, have little incentive to thoroughly review a paper’s foundational code, even when it is freely available for scrutiny. Such an approach is ill-suited to ensuring the quality of whatever paper is built on the code’s output.

Of course, this is not the only, or even the worst, consequence of current academic incentives. The reproducibility crisis, p-hacking, and a spate of errors and retractions all point towards the need for a cultural shift in research. However, with something like 70% of research being impossible without software, I’d suggest that improving software quality is a good place to start.

Academics are already a little way down this road. The OECD, research councils, universities and others recognise that, in the wake of open access publishing, open research data is the next step. The step after that, given how little data can be made sense of without software, is open research code. If the public purse has funded the writing of a piece of software, shouldn’t it be treated as a public good?

However, openness in and of itself is not a panacea. The Heartbleed bug, a security flaw affecting almost 20% of the world’s websites, went unnoticed for nearly two years despite being out in the open.

I believe we must find ways to credit both the production and the review of research software. The reviewers of journal articles should be credited too. This is easy to say, but working out how to do it, and who pays for it, is difficult.

Two obvious places to start would be Stack Exchange-style gamification, or simply paying money. In the spirit of research, I suggest we try a number of different approaches and see what works best.

  1. Attributed to Michael Marcotty in Steve McConnell’s Code Complete
