I’ve learnt my lesson for the day: even if a reviewer is just plain wrong, what you do in response can still improve the paper. And maybe even show you something new about your own data.
I’m revising a paper about my older work with plasmids. One of the reviewers, a theoretician (they wrote their review in TeX), thinks the paper “is really lacking a serious mathematical and statistical modeling effort”. And here I was happy to finally have a paper with no math in it! They don’t think it’s clear that my data support the conclusions I make, though it seems obvious enough to me. Plus, they want me to use a specific modelling method I think is seriously questionable.
At first I was upset at having to spend a bunch of time and effort responding to reviewer comments that were wrong and weren't going to improve the paper. But then, in the process of doing some math to address a question from the other (more sensible) reviewer, I realized I could extend that math to show, quantitatively, how competing evolutionary hypotheses make different predictions about what should happen in my experiments. In the end, not only can I show that my data reject one hypothesis while remaining consistent with another, I can also explain the specific shape of my data, something I'd never even attempted to do. I'm actually surprised it fits so well.
So there you go. Score another one for peer review. It’s even better than the peers doing the reviewing.