Australasian Science: Australia's authority on science since 1938

False findings: The rise in retraction of scientific results

By Andi Horvath

Medical journalist and educator Prof Ivan Oransky talks about research misconduct that, once uncovered after publication, leads to retractions of scientific papers, damaged careers, and an undermining of the scientific process. Prof Oransky suggests why retractions are on the increase, and how technology is being enlisted in the fight against fraud.

ANDI HORVATH
I'm Dr Andi Horvath. Thanks for joining us. Today we bring you Up Close to an essential part of what makes science a reliable, evidence-based source of new knowledge: peer review. Science prides itself on being a self-correcting form of knowledge generation. Researchers typically collect, interpret and document their experimental data according to the accepted conventions of laboratory science, and that documentation includes preparing a scientific paper for publication in a scientific journal. To assess a paper for publication, a journal assigns scientists with expertise in the paper's area - the so-called peers in the term peer review. They scrutinise the draft paper's details, the methodology employed, and the soundness of the authors' interpretations. They then recommend that the journal publish or reject the paper, or advise the authors to fix things before re-submitting. Once a paper is accepted and published, other scientists can cite it as valid knowledge - but it's still provisional knowledge. So the peer review process builds critical foundations for science.

But what happens when a published paper is later found to be in error? For example, when scientific results are found to be calculated incorrectly, or, on repeating the experiments in another lab, turn out to be simply impossible to reproduce? Aside from the egg on the face of the researchers, sometimes the peer-reviewed paper has to be retracted, and retractions, as we'll hear, are on the increase.

To discuss where and how the process can go wrong, we're joined by medical journalist and educator Professor Ivan Oransky, from New York University. Ivan has written widely on the ethics of scientific publishing, and is co-founder of the popular blog Retraction Watch. He's also vice president and global editorial director of the clinical and policy news site MedPage Today. Ivan's here as a guest of the University of Melbourne's Office for Research Ethics & Integrity. Welcome to Up Close, Ivan.

IVAN ORANSKY
Thanks for having me Andi.

ANDI HORVATH
Now Ivan, when and why did you start the online blog Retraction Watch? What were the pivotal moments?

IVAN ORANSKY
So it was August of 2010. Adam Marcus, who is my co-founder, and I were having a conversation, and it was a conversation like many we'd had in the past about a given retraction. Something had happened and I said to him on the phone, "Adam, what about starting a blog about this?" The reason we were having that conversation about retractions was that Adam and I were working independently, but we knew each other through medical journalism circles. He has a similar job to mine at a magazine called Gastroenterology & Endoscopy News - obviously for gastroenterologists, unless you enjoy that sort of thing, in which case you're welcome to subscribe.

Anyway, he had broken the story when he was at Anesthesiology News about a particularly terrible set of retractions - about 20 of them - involving somebody working on Vioxx. Of course Vioxx was a painkiller. It was supposed to be in some ways safer than some of the traditional painkillers; it turned out not to be. In some of the research going on around it, an anaesthesiologist named Scott Reuben was looking at things like what happened to people who were taking Vioxx and then had to have anaesthesia for surgery. The problem was that Scott Reuben wasn't actually looking at any people, because he was making up all the data. So he ended up having to retract more than 20 papers, because these were allegedly clinical trials involving dozens or more patients who didn't exist. He'd gotten away with this for a while, and he actually ended up going to prison for a short period of time, which was quite unusual. Adam had broken this story, and lots of people, including me - I was at Scientific American at the time - had followed up on it. So we had this mutual interest.

What we realised was that many times when we looked at retraction notices - which tend to be opaque, unhelpful, or to hint at things that are not quite what happened - we would find a great story. As journalists, finding great stories is really all we ever care about. But we also realised there was a transparency problem. Here is science telling us it is truly transparent and self-correcting - and in many ways it is, better than many other human endeavours - and yet here was this area where it didn't seem to like airing its dirty laundry. Since, as journalists, we enjoy airing other people's dirty laundry, we started this blog. And it took off very quickly.

ANDI HORVATH
That's an extraordinary story of fraud. Ivan, give us a picture of the types of scientific retractions, because some can be good, others can be bad, and then there's the downright ugly, and it ranges from genuine mistakes to perhaps rorting the system intentionally.

IVAN ORANSKY
Sure, so there are about 500 retractions a year, give or take. That has increased over time: from 2001 to 2010 the number went from about 40 to about 400, and now we're up a little more. Keep in mind, of course, that that's out of about one million to 1.5 million papers published every year, so it's still a vanishingly rare event, and I think that's important to remember. But about two thirds of the time these retractions are due to something that would be considered misconduct. In the US we have the Office of Research Integrity, an agency unique to the US, and its federal definition of misconduct - which is consistent with the definitions used in many other countries - is fabrication, falsification, and plagiarism.

Fabrication and falsification are what you might think of as faking it, right? You may have made up your results. You may have made your results look better than they actually are. You may have taken an image and Photoshopped it the way somebody might Photoshop themselves to look thinner or taller or better looking - so you've made your results better looking. But then there's also plagiarism, which is of course stealing someone else's text. So again, about two thirds of the time it's due to these misconduct problems. About a third of the time it's due to what we would consider honest error, or in some cases we're not quite sure.

We had a case, for example, early on that we wrote about. We were reading this retraction notice, and it was quite clear that the researcher was very upset about it, and an email exchange with him only bore that out. What had happened was someone in his lab had ordered the wrong mice. It's a very easy mistake to make if you look at how mice are ordered - there's all this terminology and minus signs and slashes and numbers and letters. But he felt terrible about it. Now, it turns out that nobody had noticed this except his lab, because they tried to repeat the experiments and do what you'd think would be the next experiment in the series, to try to build on it, and their results just didn't make any sense. They realised afterward that they had gotten what looked like a positive result only because the mice behaved that way anyway - they were the wrong mice. Again, he felt terrible.

But that's an honest error, and we should applaud that sort of behaviour. We have a whole category on the blog called Doing The Right Thing. As a journalist I don't like retracting or correcting anything, but we do it because we have to keep the record straight - and retracting a paper has a personal cost and a career cost, even if it's for an honest error. So when we see someone doing the right thing by taking some of that risk upon themselves, we want to applaud that. That's the way it should be.

ANDI HORVATH
Ivan, can you clarify, is there an increase in retractions, and if so, what do you think's going on?

IVAN ORANSKY
The number of retractions has definitely increased, somewhat dramatically: it grew tenfold in the decade ending 2010. At the same time the number of papers has also increased every year, so you would expect to see some increase. But in fact the number of retractions has far outpaced the increase in the number of papers published: papers grew about 44 per cent over that same period, retractions tenfold. So something is happening. We are better at finding these things. We are better at finding, for example, image manipulation - the same technology that allows people to Photoshop images also, run in reverse, allows people to detect manipulation. And plagiarism-detection software barely existed a decade ago, really wasn't in use at all ten years ago, and even now isn't universally used.
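To put those two growth figures side by side, here is a quick back-of-the-envelope calculation in Python using the numbers Oransky quotes; the 2001 paper count is an assumed, illustrative baseline, not a figure from the interview.

```python
# Back-of-the-envelope comparison of retraction *rate* growth vs raw paper
# growth, using the figures quoted in the interview.

retractions_2001, retractions_2010 = 40, 400          # "from about 40 to 400"
papers_2001 = 1_000_000                                # assumed illustrative baseline
papers_2010 = papers_2001 * 1.44                       # "grown about 44 per cent"

rate_2001 = retractions_2001 / papers_2001 * 100_000   # retractions per 100k papers
rate_2010 = retractions_2010 / papers_2010 * 100_000

print(f"2001: {rate_2001:.1f} retractions per 100,000 papers")  # ~4.0
print(f"2010: {rate_2010:.1f} retractions per 100,000 papers")  # ~27.8
print(f"Rate grew about {rate_2010 / rate_2001:.1f}x")          # ~6.9x
```

On these figures the per-paper retraction rate rises roughly sevenfold even after adjusting for the larger literature, which is exactly the point Oransky goes on to make: volume alone doesn't explain the increase.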

So better detection alone would produce some increase, and it does account for a good percentage of it. But there's some evidence - and I would call it circumstantial evidence right now - that the amount of fraud is in fact increasing. It's certainly reasonable to think that's due to the increasing pressures on researchers - the incentive to publish or perish. We've all heard that term, and it's true. If you don't publish in the top journals you will not get tenure, you will not get grants, you will not get promoted. So there is tremendous incentive to do whatever you need to do to get published.

Now for many people - for most scientists - that means working harder, working smarter, working better. But there's a limit to that, and sometimes people take the easy way out, which is faking the results or cutting other corners - cherry-picking results, for example: you do the experiment 20 times and you only report the one that looks the way you want it to. Those are all variations of misconduct. That's not correct; it's not the way it's supposed to happen. Yet we see time and time again that that's what is leading to a lot of these retractions.

ANDI HORVATH
So what are the consequences for the various types of retractions?

IVAN ORANSKY
It's pretty interesting. On the one hand, there isn't that much good news about what happens to people who have to retract. By that I mean there was actually a study which found that more than half of the people found by the Office of Research Integrity - the US agency charged with this - to have committed misconduct were, if you looked years later, in the same positions they held before. That suggests there aren't as many consequences as you would think. On the other hand, there was a study that looked at retractions and retraction notices, and at what happened in terms of how many times the papers were cited.

This is a very important thing in science. It's actually how many journals, scientists and universities judge themselves: how often do others refer to that work? The more someone refers to the work, the more it's thought to be important - the same way Google looks at how often something gets linked to and bumps it up in the search rankings. It's literally the same concept. So what happens to people whose work is retracted for clear fraud discovered by other people is that they see a decline in the number of citations to their work, which you would expect. Not only that, there's collateral damage: the people around you, your colleagues, and indeed the whole field see a decline in citations. So that is its own punishment. It's pretty close to death in science and in research.
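The ranking concept Oransky refers to can be sketched in a few lines. The toy example below uses an invented four-paper citation graph and the damped iteration at the heart of PageRank-style scoring: a paper counts for more when the papers citing it are themselves well cited. It illustrates the concept only, not any journal's actual metric.

```python
# Toy citation ranking: a paper scores higher when it is cited by papers
# that are themselves well cited - a simplified PageRank-style iteration.
# The four-paper citation graph is invented for illustration.

citations = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}  # paper -> papers it cites
papers = list(citations)
score = {p: 1 / len(papers) for p in papers}
damping = 0.85

for _ in range(50):  # iterate until the scores settle
    score = {
        p: (1 - damping) / len(papers)
           + damping * sum(score[q] / len(citations[q])
                           for q in papers if p in citations[q])
        for p in papers
    }

# D ends up ranked highest: it is cited by C, the paper with the most citations.
print(sorted(score.items(), key=lambda kv: -kv[1]))
```

The design point is that importance flows through the graph: being cited once by a heavily cited paper can outweigh being cited several times by obscure ones.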

On the other hand, if you've retracted a paper for an honest error, and that's clear in the retraction notice, what we see is actually a bump - an increase - in your citations. So we like to think of this as rewarding good behaviour, rewarding coming clean. That isn't universal, and we shouldn't make too much of a single study. But it's pretty encouraging, and if it pans out it supports something we've always said: detailed retraction notices that state exactly what happened - the same way a correction in a newspaper should say not just what was corrected but, if possible, how the error happened so it can be prevented next time - would really help science and scientific integrity.

ANDI HORVATH
I'm Andi Horvath and you're listening to Up Close. In this episode we're talking about the retraction of scientific papers with medical writer and publishing ethicist Ivan Oransky. Ivan, tell us about how Retraction Watch was received by the community.

IVAN ORANSKY
In general, the way scientists have responded to Retraction Watch has been very positive, and to us very humbling. We seem to have tapped into a large degree of frustration with the self-correcting nature of science and of the scientific literature. Ten years ago, if you thought there was a problem in a paper, you'd write a letter to the journal, and it would take weeks or even months for anyone to get back to you. Then they would send it to the author of the original paper and get a response. It was a long process, most of which happened completely behind closed doors, and most of the time the allegations were ignored, or brushed off if they weren't ignored. This created a great deal of frustration among researchers who wanted to do the right thing and correct the literature - they weren't claiming fraud or anything; they just wanted things to be fixed.

Very quickly we were discovered by this community, and that community has been incredibly powerful and helpful for us. They send us tips, help us when we make errors, and critique us. Some of them have been generous enough with funding that we could hire an intern. The MacArthur Foundation - a very large philanthropic organisation in the US - and a couple of other foundations noticed that we were trying to grow and needed resources to do it. MacArthur are the same people who give out the Genius Grants, although, as I corrected my mother, this was a pretty-smart-guys grant to me and Adam, not a genius grant. They gave us a very generous grant of $400,000 to grow, to hire people, and to build a retraction database, which doesn't yet exist and which will be very important.

There are certainly people who wish we would go away. I think that's the nature of hard-hitting investigative journalism, which is what we are trying to do; there are always going to be people like that. There are other sites in our ecosystem which we try to support, like PubPeer.com, a great post-publication peer review site where you can critique papers anonymously but in a very rigorous way. We're big supporters of PubPeer. They have faced legal threats, because again there are people who don't want to see this happen and want mistakes to just be swept under the rug. But the vast majority of scientists are very supportive. They send us notes and all sorts of things. They invite us to speak all over the world - like in lovely Melbourne, where I'm sitting right now. That speaks, I think, to the real power of the community and the fact that most scientists want to do the right thing.

ANDI HORVATH
Is there a risk that Retraction Watch becomes the policing body, or perhaps a place for whistle-blowers? Is that a problem, or might it even be a blessing?

IVAN ORANSKY
So there are actually lots of other sites that we think are a great place for people to blow the whistle, to raise allegations, or even to praise papers. PubMed, for example, run by the National Library of Medicine in the States, has become the place to look for abstracts - that's what they do. In late 2013 they introduced a feature called PubMed Commons where anyone who has published a paper indexed there can comment. That's taken off a little bit - I wouldn't say it's hugely successful yet, but it has certainly led to a lot of really good, high-level conversations. And PubPeer.com is a place where anonymous whistle-blowers can go.

What often happens is that - because of our resource constraints, even now - we're starting to look into some allegations, but we tend to focus only on retractions. We look at how the journal, the scientists and the institution handled and adjudicated a particular issue, and we might comment on it, report on it, and get some of the background that isn't in the notice. But we don't have the wherewithal right now to look into all these allegations. So when people send us allegations, we recommend that they go to PubPeer. Then what often ends up happening is that papers get corrected because of those comments, the authors start to contribute to the discussion, papers get retracted because of the comments - and then of course we report on it, because it's a retraction.

We're all open source, as is PubPeer. We're certainly in this to break stories - we're journalists; that's what we love to do. But we're not here to hide things or to limit coverage. We just want people to get out there and discuss things. If they're discussing it on Retraction Watch, great. If they're discussing it somewhere else, great. We do a Weekend Reads post on Saturday mornings where we link to lots of discussions happening in lots of places, and that's really popular for us. If we can bring our audience to other places so they can have conversations there, we're really excited about that.

ANDI HORVATH
How long does it take for a retraction to happen? Is this something that happens over days, months, years?

IVAN ORANSKY
That's a great question: how long do retractions typically take? So let me ask you, what do you think the record for time from publication to retraction is?

ANDI HORVATH
I reckon it's probably years, because it'll take a while for people to absorb it; maybe some people will trial it, some might cite it, but their experiments may not work. Okay, I'm just going to estimate: three to five years.

IVAN ORANSKY
Three to five years. I will say I'm going to give you points for that and I'll come back to why in a second. But in fact the record is 27 years.

ANDI HORVATH
What?

IVAN ORANSKY
Which is actually longer than some scientific careers.

ANDI HORVATH
It is.

IVAN ORANSKY
This was a paper that was published in 1985 and then retracted in, I think, the beginning of 2013. It was for duplication, which is sometimes very inelegantly referred to as self-plagiarism. You can't actually plagiarise yourself, but you can duplicate your own work and try to publish it again, and that's what happened with this paper. But typically, your guess was actually on the money: when someone studied this, the average was found to be about three years. That study is several years old now, but it's probably still true. What we've noticed, though, is that a lot of retractions are happening more quickly - the shortest on record was on the order of 48 hours, it turns out. So there's quite a wide range, but three years is probably about the average and the most common.

But because of PubPeer and the ability to comment online right away, we've seen a lot more retractions happen quickly. So I'd be interested in someone actually looking at how they're happening now and whether it's more quickly - and when we have our retraction database you'll be able to do this fairly easily. The interesting question to me is not so much how long it takes from publication to retraction, but how long it takes from the time an allegation is raised to a retraction, because to me that says a lot about how transparent and self-correcting the literature is.

ANDI HORVATH
I'm still worried about all those pre-internet papers that don't have the benefit of PubPeer and of people going online and asking whether the science is right.

IVAN ORANSKY
In 2001 a woman died at Johns Hopkins - actually a nurse who worked at Johns Hopkins - because she volunteered for a clinical study. This was an early-stage clinical study where they were just looking at safety. Well, obviously, if she died, it was not so safe. Long story short, a researcher was studying something that might have been useful for asthma, hadn't seen anything in the scientific literature about it, and thought: no one's ever tried this, let me try it. The problem was that people had tried it - in rodents. The rodents had died of a severe reaction, so of course no human trials were ever done. The reason he didn't know that, and it didn't show up when he searched the literature, was that the literature wasn't digitised yet. It was a really tragic case.

So what happened was that PubMed, the National Library of Medicine database, started digitising at least the abstracts. The cut-off at that point was 1966. They've now gone back decades - before the 1940s, though I don't remember exactly when - and they continue to do that. Some of the journals are going back even further. But to your question, a lot of the literature is still not available online. People talk about the internet as if you can publish anything on it, but before the internet very few people would ever see this work. So having more eyeballs helps. Supreme Court Justice Louis Brandeis was famously quoted as saying that sunlight is the best disinfectant; in the modern day, more eyeballs are the best disinfectant. Now, some of those eyeballs may be robots looking for plagiarism, but having more eyeballs on papers is a good thing.
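Those plagiarism-hunting "robots" generally work by comparing overlapping word sequences across documents. Here is a minimal sketch of that general idea - n-gram shingling plus a Jaccard similarity score - with an arbitrary illustrative threshold; real detection tools are far more sophisticated and compare against huge corpora.

```python
# Minimal sketch of how text-similarity "robots" flag possible plagiarism:
# compare the sets of overlapping 5-word sequences (shingles) in two texts.

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

submitted = "the mice exhibited a marked response to the compound under test"
published = "the mice exhibited a marked response to the compound in question"

similarity = jaccard(submitted, published)
if similarity > 0.3:  # arbitrary illustrative threshold
    print(f"Flag for human review (similarity {similarity:.2f})")
```

Because near-verbatim copying preserves most word sequences, even this crude measure scores the two sentences above at roughly 0.56, well above the threshold.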

So the whole notion is: let's take advantage of the technology, let's have post-publication peer review, let's accept that the publication of a paper is not the final word. You used the words "provisional knowledge" at the beginning of our talk, and that's absolutely true. We need to embrace that, and journalists need to embrace that. It conflicts with their current business models, because they're all about particular papers - really trumpeting those papers and having them come out every week or every day or whatever it is. But they need to think about science as a process, one that has become a little warped and difficult to see within the publishing system we have.

ANDI HORVATH
I'm Andi Horvath and our guest today on Up Close is Ivan Oransky. We're talking about the occurrence and issues surrounding the retraction of scientific papers. Retraction Watch has raised issues about the current system of publishing and the validation of knowledge. As Ivan said, some of it's become warped. So let's talk about accountability for the ugly side of retraction. Should the law be involved?

IVAN ORANSKY
Going to jail is very, very rare when it comes to scientific fraud - and jail is not necessarily the answer to these problems. In the States we've had four cases, really, in the past 40 years or so. There's one pending here now in Queensland, where the Crime and Corruption Commission is looking into charges against two researchers who were doing neuroscience. The details are a little fuzzy at this point, but they are alleged to have committed fraud, and they might face, if not jail time, some other sort of criminal penalty. There's a case in the US involving someone who spiked rabbit blood samples with human blood so that he could make it look as if an HIV vaccine was working. He's just accepted a plea bargain. We don't know the details at this point, but it will probably involve some kind of criminal penalty. But that's quite rare.

Again, it's only a handful of countries - in Italy there are a couple of people facing those sorts of penalties. It's not that everyone who commits fraud should go to jail; not everyone who commits any crime goes to jail. But most scientific fraud, even when a federal or state agency handles it, is handled civilly, as if it weren't important enough to merit the kind of Department of Justice attention it would get in the US if it were treated as a crime. So we feel there should be more in the arsenal for fighting and deterring fraud, and that some jail time, or at least the possibility of a criminal penalty, should be on the table.

On the other hand, it's not even the case yet that people or universities refund much of the research money - taxpayer dollars - that was used to commit the fraud. That seems like an important step that would at least deter universities, or at least help bolster their own attempts to find fraud and prevent it. So there's still a lot to be learned about fraud, and a lot of steps we can take. Whether prison is the answer is, I think, a legitimate question to ask, but there should be more punishment.

ANDI HORVATH
Ivan, I'm going to give you a magic wand: what changes would you like to see in the future of publishing?

IVAN ORANSKY
I think that if journals and scientists could embrace post-publication peer review - in other words, embrace science as it actually happens - and stop what Adam and I have called fetishising the scientific paper, treating it as the be-all and end-all, then we would be in better shape. What that will require, though, is a change in the incentive system. Economists like to say that there are no bad people, just bad incentives.

If you look at the incentive system in science today - and it varies a bit by country and by discipline - essentially it all comes down to the scientific paper. Have you published in a top journal, a high-impact-factor journal, beating out the competition? If you have, and you've published a number of such papers, that's how you'll get tenure, that's how you'll get grants, that's how you'll get promoted. It relies almost completely on that - not on what you've contributed to other people, or what kind of peer review you've done, or any of that. So all the incentives point in one particular direction, and that's what we somehow need to fix. I don't claim to know the answer, but I do think post-publication peer review is part of it. I also think the way we fund science is going to be really important - not just whether there's more money, which is a question scientists would of course like answered in the affirmative, but how we distribute that funding.

ANDI HORVATH
On the issue of distorted incentives, what about the other side of the coin - companies and pharmaceutical corporations? Is there pressure on researchers to come up with results that are commercially advantageous? Also, Ivan, scientific research is currently only published if there are results. So what happens to experiments that don't show an effect? Do we actually need a journal of negative results?

IVAN ORANSKY
Richard Smith is the former editor of the British Medical Journal - they now call themselves the BMJ. Richard has been a friend of Retraction Watch; he's said nice things about us and promoted us, and he's now on the board of directors of the non-profit we're creating. He famously wrote a piece about how the sausage actually gets made at journals. What a lot of people don't realise - and this is really true of medical journals - is that a lot of their revenue comes from what are called reprints.

So if you are a drug company and you've sponsored a study - or even if you haven't - and it's published in a major journal, which gives it that peer-reviewed imprimatur, that Good Housekeeping seal of approval, you want to get it out to as many practising doctors as you can so that they will prescribe your drug. So you will buy hundreds of thousands of copies, and the journals charge you a lot of money, because of course it's quite lucrative and they understand you need them. They don't often talk about how much money that is, although we've learned from certain court cases that it can be $1 million a pop. If you're doing one or two of those a week, that's a lot of money for a medical journal.

Keep in mind that if you are a big drug company, you're not going to buy lots of reprints of a study that says your drug doesn't work. I wouldn't expect them to; that's human nature. "Here, let me help my competitor" isn't how most companies work, at least not the ones I know. What that creates is a real incentive to publish positive findings. Now, the way Richard Smith describes it, the publisher will come to you, the editor, and say: "It's terrific that you're publishing negative studies" - in other words, studies showing things don't work - "we want to encourage that. Academic freedom, scientific integrity, all of that. Wonderful, well done." But then at the end of the year that same publisher will come and say: "I'm sorry, but we haven't sold as many reprints as we used to because you're publishing all these negative results - and that's great, you really should continue doing it - but which of your editors would you like to sack, because we're facing a deficit?" That's a real phenomenon.

So just as there are incentives in science to publish in top journals, those top journals are really only interested in big, big results - in positive results. We need to create an environment - whether it's a journal of negative results, which people have tried to start, or journals simply willing to publish negative results or studies that don't show anything new - where publishing those findings is okay and actually rewarded. That should be part of the incentive process as well.

ANDI HORVATH
Ivan tell us more about the Retraction Watch database. This is bound to have some great value.

IVAN ORANSKY
We really hope so. The MacArthur Foundation has given us a really nice grant as a way to get this database started. One of the problems is that retracted papers continue to be cited and referenced as if they hadn't been retracted. Say you're a scientist: you're doing some experiments, doing some analysis, and you want to publish your results. You go and look at the literature - maybe you downloaded a PDF of a paper a year or two ago, maybe you're looking at one of the indices or on a publisher's website. Now, if you're conscientious, you will check whether the paper has been retracted. Of course, the publishers don't make that easy: about a third of the time they don't even tell you.

But if you're not as conscientious, or you're rushed, or - like all of us - you've just got lots of things to do, you might skip that step. That happens a lot. In fact, according to a couple of studies that have been replicated, more than 90 per cent of the time retracted studies are cited as if they hadn't been retracted. That's a problem. It's pure waste. So what we're hoping to do with the database - and others will be helping us with this - is to integrate it seamlessly into any way you would do a literature search, so that any time you look up a paper, or a given author, or even a given subject, you will find out what has happened to those papers. Our hope is that that will cut down on waste. We're a non-profit, and this is not about profit for us or anyone else, but we hope it will make science work more efficiently, because that is what we want.
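As a rough sketch of what that integration might look like at its simplest - checking a manuscript's reference list against a retraction dataset before citing - consider the following. The database contents, DOIs and field names here are entirely hypothetical placeholders: at the time of this interview the Retraction Watch database did not yet exist.

```python
# Hypothetical sketch: flag retracted papers in a reference list before
# submission. The retraction records and DOIs are invented placeholders;
# this is not a real API.

retraction_db = {                      # DOI -> reason for retraction (hypothetical)
    "10.1000/example.123": "falsification",
    "10.1000/example.456": "duplication",
}

manuscript_references = [              # DOIs cited in a draft (hypothetical)
    "10.1000/example.123",
    "10.1000/example.789",
]

for doi in manuscript_references:
    reason = retraction_db.get(doi)
    if reason:
        print(f"WARNING: {doi} was retracted ({reason}) - check before citing")
```

The point of building the check into the search tools themselves, as Oransky describes, is that the conscientiousness is no longer optional: every lookup surfaces the retraction automatically.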

ANDI HORVATH
Retraction Watch provides a great service to the scientific community and valuable insights to the current system of scientific knowledge generation. Professor Ivan Oransky from New York University, thank you for being our guest on Up Close today.

IVAN ORANSKY
Thank you Andi, it's been a great discussion.

University of Melbourne