Pear and Pecan Muffins

Yesterday we received our first veg box delivery from Phantassie. Nestled among such gems as romanesco and Jerusalem artichokes were a pair of ripe pears ready to stave off my despair at America’s new trumped-up democracy. Is there a better way of hiding despair than under a pear muffin?


2 large ripe bananas
1 large ripe pear (Comice)
1/4 cup of soy milk
1 tablespoon peanut butter
1 tablespoon ground flaxseed
2 teaspoons vanilla essence
1.5 cups wholewheat flour
1 teaspoon baking powder
1/4 cup golden caster sugar
3/4 cup chopped pecans
3/4 cup chopped pear


Preheat oven to 180°C.

Combine the bananas, large pear, soy milk, peanut butter, flaxseed and vanilla essence in a food processor.

In a separate bowl, sift the flour and baking powder, then add the sugar and mix to combine. Mix in the wet ingredients until just incorporated. Fold in the chopped pear and pecans.

Fill muffin cases and bake for 15–18 mins. (The tops should be lightly browned and a skewer should come out clean.)

Place on a cooling rack.

Enjoy with a nice cup of tea whilst pondering the breakdown of society. (Other pondering options available at your discretion.)


Incentives and research software

Happily, we now have open access to most new academic publications, and publicly accessible data and software are coming soon. However, I don’t think openness alone is sufficient to assure reliable, reproducible research. I believe we need to change what counts as a credit-worthy output in academia.

For many academics the major metric of success is publishing frequency. A number of authors, including Daniel Sarewitz, and Marc A. Edwards and Siddhartha Roy, have recently examined the wider implications of this for research. Here, I’m going to concentrate on research software.

Favouring publishing frequency and novelty over rigour leads to code which produces any plausible, explainable result in as short a time as possible. Sustainability, reproducibility and robustness as the foundation for building knowledge are all too easily neglected. Often code is not fit for human consumption; instead it is an ephemeral love letter from researcher to hardware [1]. The affair is brief and the correspondence disposable.

Unpaid reviewers, who themselves are rushing to get their own next publication out, have little incentive to thoroughly review a paper’s foundational code, even when it is freely available for scrutiny. Such an approach is ill-suited to ensuring the quality of whatever paper is based on the code’s output.

Of course this is not the only, or even the worst, consequence of current academic incentives. The reproducibility crisis, p-hacking, and a spate of errors and retractions all point towards the need for a cultural shift in research. However, with something like 70% of research being impossible without software, I’d suggest that improving software quality is a good place to start.

Academics are already a little way down this road. The OECD, research councils, universities and others recognise that, in the wake of open access publishing, open research data is the next step. The step after, when so little data can be made sense of without software, is open research code. If the public purse has funded the writing of a piece of software, shouldn’t it be treated as a public good?

However, openness in and of itself is not a panacea. The Heartbleed bug, which put a security flaw in almost 20% of the world’s websites, went unnoticed whilst out in the open for nearly two years.

I believe we must find ways to credit both the production and the review of research software. The reviewers of journal articles should be credited too. This is easy to say, but working out how, and who pays, is difficult.

Two obvious places to start would be Stack Exchange-style gamification, or simply the payment of money. In the spirit of research, I suggest we try a bunch of different things and see what works best.

[1] Attributed to Michael Marcotty in Steve McConnell’s Code Complete.