Are we publishing unreliable research?

by Chris Crockett on December 6, 2013

Publish or perish. Like it or not, prolific paper publishing dictates academic career success. The quest to get the grant, land that postdoc, and achieve tenure means that the necessary dirty work of science—replication—often gets brushed under the carpet.

The Economist recently published an article—Unreliable research: Trouble at the lab—that looks at just how bad scientists are at checking themselves.

Academic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think.
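The scale of the problem is easy to illustrate with a back-of-the-envelope calculation of the kind the Economist article walks through: when only a small fraction of tested hypotheses are actually true, even reasonably powered studies with a standard significance threshold produce a surprisingly large share of false positives. A minimal sketch in Python (the specific numbers here — 10% of hypotheses true, 80% power, a 0.05 significance threshold — are illustrative assumptions, not figures from the article):

```python
# Back-of-the-envelope false-discovery arithmetic (illustrative numbers).
n_hypotheses = 1000
frac_true = 0.10   # assumed fraction of tested hypotheses that are true
power = 0.80       # assumed probability a real effect is detected
alpha = 0.05       # false-positive rate for a null hypothesis

n_true = n_hypotheses * frac_true   # 100 true hypotheses
n_null = n_hypotheses - n_true      # 900 null hypotheses

true_positives = power * n_true     # 80 correct detections
false_positives = alpha * n_null    # 45 spurious "detections"

false_discovery_rate = false_positives / (true_positives + false_positives)
print(f"{false_discovery_rate:.0%} of positive results are false")
# -> 36% of positive results are false
```

Under these assumptions, more than a third of published "discoveries" are wrong even though every study individually followed the rules — which is why self-correction can't be taken for granted.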

While most of the article focuses on trouble in the life sciences and psychology—areas where results influence high-stakes factors like profit and policy—astronomy is definitely not immune. In the article, James Ulvestad, head of the NSF Astronomy Division, points out that merit panels “tend not to put research that seeks to reproduce previous results at or near the top of their priority lists.”

The article suggests a path forward, quoting from congressional testimony given earlier this year by Bruce Alberts, former editor-in-chief of Science:

Journals must do more to enforce standards. Checklists such as the one introduced by Nature should be adopted widely, to help guard against the most common research errors. Budding scientists must be taught technical skills, including statistics, and must be imbued with scepticism towards their own results and those of others. Researchers ought to be judged on the basis of the quality, not the quantity, of their work. Funding agencies should encourage replications and lower the barriers to reporting serious efforts which failed to reproduce a published result. Information about such failures ought to be attached to the original publications.

The entire article can be found on The Economist’s website.

How can the astronomy community apply these lessons to our own research? Have you seen evidence of widespread poor research being published? Have you run into “anti-replication” bias among colleagues and funding agencies? What can we do to improve the reliability of our work? Let’s hear it in the comments!

5 comments

1 TMB December 7, 2013 at 11:45 am

I think this is only really an issue in topics that aren’t very active. In a hot topic, everyone is getting more data and making better measurements, so if an early measurement turns out to be unrepeatable, the newer, better data will show that. Only if an area doesn’t have many active groups is an unrepeatable measurement likely to stand for a long time.

Of course, that isn’t to say that such areas don’t exist (which topics are hot changes from decade to decade), or that there are no unreplicable papers in hot topics. But more of the research is being done in the areas with more researchers, simply by virtue of there being more researchers doing it, so most research happens in areas where scientific self-correction will kick in when unreplicable results appear.

2 Andy December 7, 2013 at 12:01 pm

I tell my students that if they publish garbage, one of two things will happen – there will be a refutation of it, or there won’t. The first is bad for your career, but the second is even worse – it means no one is interested in your work. MORAL: Be careful in everything you do.

3 Mike December 9, 2013 at 1:41 pm

When the problems relate to statistical methodology, I don’t think the astrophysical literature is always self-correcting, even if a topic is popular. If an astrophysical result rests on flawed statistical methodology, later studies that use the same methodology may reproduce the erroneous results. These problems would get corrected in the statistical literature, but most astronomers don’t read the statistical literature, so such errors may go unnoticed and propagate to new studies.

4 Jabran December 9, 2013 at 2:51 pm

I think it is important to recognize that there is often strong confirmation bias, to which we are all susceptible to varying degrees. I know that when I get a result that seems inconsistent with what I expected, I will generally be much more skeptical, and I may be more likely to spend extra time double- and triple-checking my work. I think we all have expectations of results, and these are largely informed by the body of literature in our field. Even if an idea is popular, that certainly does not make it right. But if our data disagree, we may go back and see if we “missed something” in order to bring them back in line with expectations. Honesty and integrity should not be taken for granted as the norm when doing science. We have a responsibility to be honest with ourselves, first and foremost, about how we do science.

5 Joseph Wang January 11, 2014 at 6:14 pm

I’ve found that some of these issues are much less severe in astronomy than in the biomedical sciences or (gag) economics.

There are a few reasons for this. One of the more important is that the journals in astronomy really aren’t gatekeepers. If you have data, you publish, and it’s not hard to pass peer review. In biology, the journals consider themselves gatekeepers, which means they will only publish “significant” research, and that leads to some huge biases.

Also the journals don’t have a “lock” on data. You can get your data to the community through a lot of mechanisms that bypass journals. The other thing is that you can just publish data. In biology and economics, you have to have a specific hypothesis, whereas in astronomy, you can say “well I just looked at Jupiter and this is what I saw.” The fact that the data is not always linked to a hypothesis reduces confirmation bias.

Another thing is that, frankly, the results of astronomy papers don’t affect funding. It’s not like biology (or, worse yet, economics), where there may be millions or billions of dollars riding on the result of a paper. I can’t offhand think of a situation in astronomy where the result of a paper caused money to change hands, but this happens a lot in biology and in economics (and it’s terrible).

One thing that’s nice about astronomy and astrophysics is that the publication system isn’t nearly as dysfunctional as in other fields.
