Paper editing

From Bayesian Behavior Lab



Paper editing for editors, reviewers and authors

This writeup reflects my own current thinking about editing, not that of PLOS or any other publisher. I wrote it very quickly, so it will only become polished after I get feedback from a couple of people. Feedback is invited via email or to @kordinglab.

Outcomes of the overall edit process

The outcome of the review process usually falls into one of four categories:

Accept

The paper is both relevant enough and correct enough to be published as submitted. The paper now goes to the production department.

Minor revisions

The paper is relevant enough and correct enough to be published, but there are a number of simple things that should be corrected before final acceptance. After a minor revision, the paper is seen by the editor, who generally decides whether the authors have done a decent job and does not send it back out to the reviewers. Of course, if the authors are lazy and do not make the minor changes they should, the editor will rightly get angry at them and might treat the resubmission like the response to a major revisions outcome.

Major revisions

The paper could in principle be strong enough for the journal and could, with significant work, be made sufficiently correct for publication. It may still be rejected if the requested changes cannot be made or the authors do not want to make them. It can also ultimately be rejected if, as a result of clarifications during the process, the reviewer or editor finds that the paper is too insignificant or cannot be corrected. Major revisions almost invariably imply that additional experiments or analyses are necessary.

Reject

The paper is either too insignificant to be published in the journal, or the approach is sufficiently misguided that even extensive improvements cannot get the paper above the publication bar.

The role of the reviewer: relevance, correctness, and improvements

The reviewer provides three things.

  • They generally provide a recommendation of what the editor should do (see above). This is a summary of the review for the editor. The editor will be very aware of this high-level recommendation and less aware of the precise content of the written review.
  • They also provide a concrete list of reasons for their recommendation. Specifically, they will comment on correctness and relevance (see below). It happens quite frequently that reviewers believe that relevance is not for them to decide. This just makes the job of the editor harder (which I guess is fine) but also more noisy (which isn't). I want to urge all reviewers to evaluate both for better journals. Often the editor's speciality is far enough from a paper that they rely on scientists in the field to say whether the reviewed paper is relevant enough. Editors and authors also need help evaluating correctness. For example, many of my papers had much better reference lists and better control experiments after the review process.
  • They provide a list of concrete suggestions for improving the manuscript.

How Editors make decisions based on reviewer comments

  • First off, the editor has to read between the lines and ask whether the paper really is strong enough for the journal they are editing for.
  • If all reviewers recommend accept or minor revisions, then the editor will generally not send the paper back to the reviewers. Obviously, if the editor has major doubts about correctness this default may be overridden, but it is the default.
  • If at least one of the reviewers recommends major revisions or reject, then the paper will be sent back, either to that one reviewer or to all of them. Personally, I only break this rule if I really feel that a reviewer is not being fair (e.g. the famous "this has all been done before" review without evidence).
  • If reviewers indicate with convincing evidence that the paper is not relevant enough or has errors that cannot be overcome, then the paper will be rejected. Relevance criteria are soft and there are no alternatives. When editing for PLOS CB, my relevance criterion is "if one of my students wanted to work in that general field, would I recommend they read the paper?" For relevance, I read between the lines of all the reviews. For errors the situation is different: if one reviewer shows a real mistake in the logic of the manuscript, then they overrule the rest. Basically, relevance benefits from democracy, correctness does not.
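The default rules above can be sketched as a toy decision function. This is my own formalization with made-up labels ("accept", "minor", "major", "reject"), not any journal's actual policy, and it deliberately leaves the final accept/reject judgment to the human editor:

```python
def editor_default(recommendations, correctness_doubts=False):
    """Return the editor's default next step given reviewer recommendations.

    recommendations: list of strings, one per reviewer, drawn from
    "accept", "minor", "major", "reject" (hypothetical labels).
    correctness_doubts: True if the editor has major doubts of their own.
    """
    if not correctness_doubts and all(r in ("accept", "minor") for r in recommendations):
        # All reviewers are satisfied: no further round of review.
        return "decide without re-review"
    # At least one "major" or "reject" (or the editor's own doubts):
    # the paper goes back out, unless the editor judges a review unfair.
    return "send back to reviewers"
```

For example, `editor_default(["accept", "minor"])` takes the fast path, while a single `"major"` among otherwise positive reviews sends the paper back out, matching the asymmetry described above: one credible correctness objection outweighs the majority.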

Who should review a paper

  • For relevance, having someone with a good overview of the wider field is very useful. Relevance evaluations can be spot on and yet very short. Authors hate them. But reviews like "the only thing added over paper xxx is yyy, and that combination is not overly surprising" are actually very informative to editors, and I believe rightfully so.
  • For correctness, having someone who is meticulous is important. If a paper uses complicated methods (statistical or experimental), then someone who knows those techniques is also important. Good correctness evaluations are usually quite long.
  • Ideal reviewer pools combine people who are good at evaluating relevance with people who are good at evaluating correctness. I find that young people are better at evaluating correctness.
  • Obviously, a reviewer should have no conflicts of interest. In practice this is often hard to evaluate, but I find that reviewers are usually very forthcoming about them. And yes, editors are not generally aware of potential joint grants or papers and often rely on the honesty of the reviewers.

Correctness versus relevance

Correctness

For me, the probability of a false positive is important, but so is conceptual correctness. Basically, correctness is the degree to which a paper can convince the reader of its central claim. Major comments from reviewers usually attack correctness.

Relevance

Relevance is affected both by the novelty of the contribution and by its importance. Basically, relevance is a matter of who cares and how much.

This page was last modified on 5 November 2013, at 17:04.