What is wrong with the scientific publishing system?

From Bayesian Behavior Lab

Three factors characterize papers

I think what is wrong with the publishing system is different from what others complain about. From my perspective, the main problem is that it confounds three things.

(1) Truth - each paper has a truth value associated with it. Is it significant? How p-hacked is it? Will it generalize?

(2) Innovativeness - to what degree does a paper break new ground or open up new avenues?

(3) Relevance - who cares about the message of the paper?

Examples

The "profile" of a journal mixes these three things together, and glamour journals heavily overweight (3). But these points should really be evaluated separately. A few examples:

(1) I want my doctor to read Cochrane reviews, not Nature papers. On the truth axis they are far better than the entire rest of the biomedical literature.

(2) I want to read the innovative papers in my narrow field. For example, a paper that speeds up fitting of GLMs to data is extremely important to me, and to the other ten people building methods on top of it.

(3) My students should read the easily readable papers in the broader field to get an overview.

A glamour journal is, by definition, a journal that is read by many people and hence cited by many people. So it selects for relevance. As it should: it is good because it provides visibility for good introductory papers to the field.

The problem

The problem is that, given the way the system is organized, it is very hard to distinguish the huge number of mildly relevant papers that are weak on both truth and innovativeness from the small number of truthful, innovative papers that happen to be relevant to only a small group of people. I do not see how post-publication peer review or any of the other proposed reforms would really solve this problem.

Counting citations is just as bad: readability will almost always trump innovativeness.

One example of the problem is the following. Assume someone develops a new technique (very innovative) and writes a very math-heavy paper about it (not relevant or readable to most), published in a non-glamour journal. Now a bunch of people build software packages based on it to analyze brain signals. They write a few easily accessible articles in glamour journals, which collect lots of citations. The person who invented the method, as well as the journal they published in, will look far weaker than those who popularized the method.

Solutions

(1) Reviewers should, perhaps above all else, come up with an evaluation of truth. How solid is the finding? Would it generalize? Hey, let the reviewers put a probability on replication and have them bet their own money on it ;)
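Such replication bets could be scored with a proper scoring rule; the Brier score is one standard choice. A minimal sketch (the function and the interpretation comment are illustrative, not part of any existing review system):

```python
def brier_score(predictions, outcomes):
    """Mean squared error between predicted replication probabilities
    (each in [0, 1]) and observed outcomes (1 = replicated, 0 = not)."""
    assert len(predictions) == len(outcomes) and predictions
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# A reviewer who is confident and right scores near 0; always hedging
# at 0.5 scores exactly 0.25; confident and wrong scores near 1.
```

Lower is better, so paying reviewers in inverse proportion to their Brier score would make the "bet" concrete while rewarding honest, well-calibrated probabilities.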

(2) Reviewers should evaluate innovativeness while setting aside relevance.

(3) Reviewers should describe, in a machine-readable format, who should care about a paper. And, importantly, we need algorithms that quickly estimate to whom a paper is relevant and recommend it to exactly those people.
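To make this concrete, here is a minimal sketch of what machine-readable relevance tags plus a matching algorithm could look like. All tag names, class names, and the overlap threshold are invented for illustration; a real system would need a controlled vocabulary and a far better relevance model.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    tags: frozenset  # reviewer-assigned, machine-readable relevance tags

@dataclass
class Reader:
    name: str
    interests: frozenset  # self-declared interest tags

def relevance(paper: Paper, reader: Reader) -> float:
    """Jaccard overlap between the paper's tags and the reader's interests."""
    union = paper.tags | reader.interests
    return len(paper.tags & reader.interests) / len(union) if union else 0.0

def recommend(papers, reader, threshold=0.25):
    """Return titles of papers whose overlap clears the threshold, best first."""
    scored = sorted(((relevance(p, reader), p.title) for p in papers), reverse=True)
    return [title for score, title in scored if score >= threshold]
```

The point of the threshold is exactly the one made above: a paper that is highly relevant to ten methods people should reach those ten people, and nobody else.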

I think it would be nice to build a journal around these ideas.

This page was last modified on 31 December 2015, at 16:22.