Reader Comments

Post a new comment on this article

The perils of "publish first, curate second"

Posted by acrits-christoph on 18 Feb 2019 at 22:04 GMT

I enjoyed this piece. I think it suggests a lot of great improvements for tackling many of the issues facing scientific publishing today. However, I am worried about an enormous issue in science that the authors fail to consider: the important role that peer review and the pre-publication editorial process play in communicating to non-scientists (and even non-field experts) what is and isn't pseudoscience. To tackle this, we need to consider that the future of scientific publishing must improve upon, as opposed to tear down, mechanisms of gate-keeping and clear communication from the scientific community on what is and is not within the realm of good science.

From the scientist's perspective, the worlds of pseudoscience feel distant and insignificant - as they should be, as the scientific community currently does a reasonable job of keeping it at bay in our institutions. However, to the non-scientist, the situation is exactly reversed - the world of pseudoscience is front and center on the internet and on TV and in daily life, often more abundant, dominant, and convincing than available accurate scientific content. Large fractions of the educated, developed world reject evolutionary biology, deny the findings of climate science, believe in non-existent dangers of public vaccination and so on - touching every aspect of the life sciences. As scientists we have a strong desire to dismiss these beliefs and the communities that have developed around them as irrelevant because they feel so irrelevant to the science we do - yet they manage to convince a large fraction of the public and influence politicians and public policy to great effect.

The first point that is often made is that this is primarily an error on the part of scientists - that we simply aren't reaching these people - that it is simply a matter of a lack of education. This stems from unfamiliarity with the astounding breadth of pseudoscience. Pseudoscience is driven by multiple multi-million-dollar industries consistently capable of producing content that appears scientific, intellectual, and wholly convincing to someone who is not a field expert. This content gets consumed by, and convinces, millions. Answers In Genesis is a creationist organization that makes upwards of $20 million a year and publishes its own fake "creationist" journal with content that is surprisingly thorough and scientific-sounding. The Discovery Institute is an institute promoting intelligent design (revenues of $4 million a year) that consistently publishes mainstream "science" books rejecting modern evolutionary biology, and lobbies extensively against the teaching of evolution in the United States. The climate denial blog Watts Up With That publishes in-depth articles (with seeming scientific rigor) attacking the science behind climate change and receives millions of views a month. The Heartland Institute (revenue of $5 million a year) produces publications and hosts large conferences attacking the science behind climate change every year.

The second mistake is to assume that as scientists, we are on equal footing here. Nothing could be further from the truth - pseudoscientific grifters are often far more gifted communicators and salespeople, and their field selects for this (indeed, it is the only trait it selects for) much more than ours does. They are usually backed by multi-million-dollar organizations and funds specifically designated for the purpose of convincing people. And convincing and communicating with the public is their full-time job, while it occupies only a fraction of the time of even the most publicly vocal active researchers. The only tools scientists have at their disposal are greater domain knowledge and the truth of the situation, which unfortunately are not the guaranteed tickets to winning a debate that one might hope.

In social and media bubbles dominated by other scientists and high-quality publications, and with PubMed and Google Scholar at our disposal instead of just Google, we rarely encounter this content. We rely (a) directly on a trusted communicator, the peer-reviewed scientific literature, and (b) indirectly on networks of trust - other scientists we know and scientific institutions - to evaluate the validity of scientific content outside our fields of expertise. Occasionally we can foray into studying an issue from another field, but it takes an enormous amount of time to satisfactorily evaluate content from another field, because science is hard.

Non-scientists do not have these informal networks of trust. And in fields that are one or two degrees away from their own, many scientists may not either. Many only have the scientific literature and the understanding that the work within it has been vetted by three active researchers in the field of study and an editorial crew with general domain knowledge, and that if other scientists were to find critical flaws in the work, respectable journals have mechanisms in place that allow a general scientific veto (the retraction) of excessively low-quality work. It is important to promote this understanding, while also admitting that it is sometimes fallible - but it remains statistically far superior at identifying and communicating what is and isn't science compared to the alternative non-scientific literature. Those working in scientific communication, and those who actually engage with pseudoscientific communities, will attest to the importance of the rhetorical question "do you have a scientific citation for that?" in debate - it is by explaining that system of indirect trust to others (and keeping it in place) that we can empower people to distinguish science from pseudoscience.

What is the best alternative? Well, we only know what does not work: "open debate". We know this because it is the status quo today in the entire realm of publishing that the scientific community doesn't gate-keep: the internet, Twitter, blogs, magazines, and faux and spam journals either designed for low-quality science or conveniently profiting from it. And this world fails to produce accurate scientific content - indeed, the casual reader is often unable to find a convincing consensus on some of the most robust and basic findings of science! On complex issues, the reader is often presented with a lengthy domain-specific back-and-forth, difficult to follow without hours of research for anyone who is not a field expert - even someone trained in the sciences. The truth is that in these long "debates" we end up being influenced by the side with better rhetoric, the side that confirms our pre-existing biases, or the side in which we a priori held a good deal of trust. Internet debates also select for "winners" who are capable of endlessly writing ever lengthier rebuttals and responses as opposed to actually doing scientific work.

If "publish first, curate second" is a mantra with no qualifications, it removes the single greatest "feature" of the scientific literature - that it is scientific at all. The qualifications can be simple and synergize nicely with the suggestions in this piece - emphasize the scholarly importance of peer review, publish and cite optionally anonymous reviews, allow additional post-publication peer reviews - but also emphasize the importance of pre-publication, randomized peer review. While all forms of gate-keeping have an air of elitism that we young scientists and open access proponents find distasteful, it is the alternative - in which domain experts can successfully communicate the distinction between high-quality and low-quality science only to other domain experts - that is a self-defeating form of elitism.

No competing interests declared.

RE: The perils of "publish first, curate second"

bstern replied to acrits-christoph on 23 Feb 2019 at 21:39 GMT

Thank you for your comment. We argue in the abstract of our article ‘A proposal for the future of scientific publishing in the life sciences’ that the current ‘curate first, publish second’ approach to scientific publishing should be replaced with a ‘publish first, curate second’ approach. While the pithy expression ‘publish first, curate second’ encapsulates our ideas in a nutshell, it isn’t meant to imply that we favor a free-for-all system where anybody can publish anything as science or anybody’s comments carry equal weight. To the contrary, both the initial publishing on a preprint server and the subsequent peer review and curation steps need careful quality controls. For example, we say in the article that ‘The scientific community and publishers should define appropriate standards and quality controls that must be satisfied by all author-posted articles (including checks for plagiarism, data and/or image manipulation, author affiliations, data availability, violation of the law, etc.).’ A simple verification that the authors are affiliated with an accredited research institution, for example, would go a long way to prevent postings of pseudoscience. And for pseudoscience or poor science posted as a preprint by accredited scientists, the subsequent transparent peer review would flag and disqualify the work.

Crits-Christoph rightly points out that pseudoscience industries are powerful and able to reach millions through publications that appear scientifically rigorous. But why is it so easy for them to fake a scientific appearance? We think it is because the scientific community itself relies too heavily on ‘appearance’ in judging what is and what isn’t rigorous science. It is the ‘appearance’ in certain journals that signals quality and importance, and not, as it should be, the underlying evaluation process, which typically remains hidden from view. For the same reason, predatory open access journals can get away with pretending to execute peer review when all they do is post articles. If we don’t open up the scientific evaluation process, we open ourselves up to fakery.

How can we create more effective and transparent evaluation processes? In our article, we propose that the default should be that peer review reports are published and that peer review and curation services take better advantage of community input after publication. But this input doesn’t have to be organized in an ‘Amazon-style’ fashion, as Crits-Christoph seems to suggest. Future peer review and curation services can and should filter community-based evaluations based on the contributor’s expertise, judgment and impartiality, just as journal editors carefully weigh the feedback from peer reviewers today.

We agree with Crits-Christoph’s statement that ‘we need to consider that the future of scientific publishing must improve upon, as opposed to tear down, mechanisms of gate-keeping and clear communication from the scientific community on what is and is not within the realm of good science.’ We believe that our proposal promises to deliver on that challenge. In fact, we are concerned that adhering to today’s inefficient and non-transparent publishing practices will not prevent but rather invite pseudoscience to masquerade as science. Crits-Christoph’s examples illustrate that this threat is not imaginary but very real.

No competing interests declared.