Reader Comments

Study metrics are flawed

Posted by djao on 27 Dec 2014 at 02:38 GMT

Efficiency in scientific publishing is not measured by how quickly you can re-create an existing article verbatim. If this were indeed an important task in a scientist's life, then I would agree with what this study is measuring, and I would agree that Microsoft Word is more efficient for this task. But never in my entire career have I found myself in a situation where I needed to replicate an existing article word-for-word.

In the context of real scientific research, where typing up the article is just one of a million tasks in the publication process, the speed and accuracy of the typing itself are really meaningless. Much more important are: Can I collaborate with others? Can I use git? (Good luck with Word files in git.) Can I interface with my software tools? (My software generates tables for me; I'm not going to type up a table by hand, as was done in the study, unless I absolutely have to.) Can I maintain a single bibliography database that supports every citation in every paper I have ever published?
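To make the last two points concrete, here is a minimal LaTeX sketch of the workflow I mean. The file names (refs.bib, table_generated.tex) and the citation key are placeholders of my own invention, not anything taken from the study:

    % One shared bibliography database and a machine-generated table.
    % refs.bib, table_generated.tex, and the key smith2014 are placeholders.
    \documentclass{article}
    \begin{document}

    This builds on earlier work~\cite{smith2014}.

    % The table below is produced by my analysis software and included
    % verbatim; nothing is retyped by hand.
    \input{table_generated}

    % Every one of my papers points at the same refs.bib, so one
    % database serves every citation I have ever used.
    \bibliographystyle{plain}
    \bibliography{refs}

    \end{document}

Note that both the table and the bibliography live in plain text files, which is also exactly what makes git usable here.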

The study authors were careful to recruit participants from a wide variety of academic disciplines, but each participant was asked to reproduce only a single article, one from the field of cognitive science (the same article for every participant). This is useless! The software's efficiency at typing up a cognitive science article says nothing about its efficiency at typing up an article from another field, such as my own (mathematics).

In short, the study measures a useless metric (verbatim copying accuracy), and it does so in a way that prevents its conclusions from applying to articles in other scientific disciplines.

No competing interests declared.