
The Psychology of Misinformation

By Ullrich Ecker

Misinformation affects our reasoning and decision-making. Unfortunately, a number of cognitive factors limit the effectiveness of retractions and refutations, ensuring that misinformation sticks.

Misinformation may not be a new problem, but the unprecedented access to information we enjoy in today’s “information society” has exacerbated its influence. People rely more and more on social media, blogs and (sometimes dubious) websites not only to get their daily news fix but also to get medical advice or information about scientific issues such as climate change.

To compete with the deluge of freely available information, traditional media also opt for sensationalised headlines as “click bait” designed to increase their web traffic and thus maintain their advertising revenue. Emotional and attention-grabbing: yes. Fact-checked: maybe not so much.

Moreover, the modern media’s fondness for “balanced” coverage – even in the absence of balanced evidence – contributes to the problem. For example, 97% of active climate scientists agree – based on the available evidence – that humans are causing global warming, yet much of the Australian public believes there is still scientific disagreement on the issue. Arguably, one of the reasons for this divide is that the media continues to give both sides of the public debate equal space (or air time).

Easy access to information is, of course, preferable to censorship and imposed restrictions. However, initial hopes that better access to information would lead to a better-informed public have been dampened. This is not just because misinformation is “out there”; it is because of what misinformation does to our mind.

Misinformation affects our reasoning and decision-making. We have found, unfortunately, that the way our mind tends to work limits the effectiveness of retractions and refutations. In simple terms: misinformation sticks.

To investigate the “stickiness” of misinformation in the lab, we present people with a report about an event or a script about a causal relation in the world, and later on retract or refute a piece of critical information for some people but not others. We also use control groups that don't receive the critical information at all.

We have done this with undergraduate students and people from the wider community, and even with representative samples from Australia and the USA using online surveys. We ask participants to make inferences, draw conclusions, predict future developments or indicate their behavioural intentions, and then measure how their reasoning and decision-making are affected by the critical misinformation, in particular after it has been corrected.

Previous research by myself and others has found that once a piece of misinformation is processed by a person it is very difficult to undo its effects. You can’t just take it back. Retracting the misinformation – just saying that it’s not true – does very little. Memory is not like a tape that can simply be recorded over, and there is no magical eraser for the mind’s whiteboard. At best, a piece of misinformation is tagged as incorrect. It will stay available in memory and will influence people’s reasoning and decision-making, even after they have received, understood, believed and demonstrably remembered the retraction.

For example, if we present a news report about a bushfire that first suggests arson but then concludes that there was no evidence that the fire was deliberately lit, people will still refer to arson in their reasoning. Typically, what we find in the lab is that a credible retraction only reduces a person’s reliance on misinformation by about half, and sometimes a retraction has no observable effect at all.

One way to think about this persistence of misinformation is the notion that people build mental models of the world, and use these models to think about stuff (because we think inside our heads and the world is mostly outside of our heads). If a piece of misinformation is part of such a mental model, its retraction will leave people with a gap in their understanding. People do not like having such gaps, so as long as no plausible alternative is provided, they often revert to the retracted misinformation to reason about an issue in a consistent manner. In other words, people prefer complete but incorrect models over incomplete models.

So, the inaccurate speculation about arson causing a fire will continue to influence people’s reasoning even when there is a clear retraction. For example, people exposed to the misinformation and the retraction will more strongly endorse increased funding for arson prevention than a control group. Even if a plausible alternative is provided – such as evidence that a lightning strike started the fire – reliance on the retracted misinformation is reduced but its effects remain measurable.

The reason I used this fictional example here is that the effects of misinformation are amplified by repetition: the more often you hear a myth, the more likely you will believe it. This holds true even if you hear it repeatedly from the same source. Repetition makes things familiar, and people trust and believe familiar things. Trust me, there’s a good reason why fast food chains and other companies invest millions of dollars into advertising campaigns that repeat the same messages over and over again. While repeating misinformation doesn’t make it true, it does enhance its effects – and hence I hesitated to repeat and thus potentially strengthen an actual myth in the example above.

Unfortunately, in order to refute a myth you usually need to repeat it – how am I going to explain to you that a fire was not caused by arson without mentioning the arson allegation? Although this seems like an innocuous truism, it actually has profound psychological effects.

A recent study illustrates this. We gave people a set of claims that were either true or false, but whose veracity was unclear to them (e.g. did you know that the colour of an egg is related to the colour of the chicken’s ear lobe? Neither did I!). People thus rated their initial belief in the claims at around 5–6 out of 10 on average.

Then we told people which claims were true and which were false. Not surprisingly, people’s belief in the facts went up to about 9 on that 10-point scale and stayed there for up to a week. Likewise, people’s belief in the myths initially went down to about 1 out of 10, but after about a week the false beliefs started to bounce back.

We argue that this rebound effect occurs because presenting the myth in order to retract it makes it more familiar. After some time, when people’s memory for the details of the retractions has faded, the myths’ familiarity leads people to again falsely accept them as true. Thus, by repeating the myth and then saying it’s false, one may inadvertently reinforce it.

We have found that one effective way to reduce the effect of a myth on people’s memory and reasoning is to warn them ahead of time that they may be exposed to misinformation. In that study, people who had been warned processed incoming information more carefully and strategically, which reduced the impact that misinformation had on their subsequent reasoning.

If I tell you now that you are about to read a myth you will be prepared and cognitively “on guard”, and will thus be less affected by the misinformation. So now that you’ve been warned I will use examples of real-world misinformation to explain how retractions are even less effective when they violate a person’s worldview.

In general, people look for and preferentially process information that confirms what they already know and what they believe. This is called “motivated reasoning”.

For example, someone who already believes that vaccinations are nothing but a profit-making scheme by the pharmaceutical industry will be more likely to believe one of the many myths surrounding vaccines despite the evidence of how many lives vaccines have saved. Such misinformation – for example the myth (careful, be prepared!) that some vaccines can cause autism – has measurably reduced vaccine uptake and has thus led to many preventable cases of serious disease and deaths, not to mention the enormous waste of public funds spent on research and public information campaigns trying to refute the myths. Likewise, a person who strongly opposes any regulation of the free market will be more likely to believe the myth (remain on guard!) that the planet is cooling instead of warming because mitigative climate action may lead to market regulations.

The same rationale applies to retractions: if a claim that a person wants to believe is retracted, belief in the claim is not reduced; in fact we often find that belief in the claim actually grows stronger after its retraction, even when the retraction is unequivocal. We have termed this “the worldview backfire effect”.

A recent study in my lab demonstrated this effect in the area of politics. If political party X is brought into disrepute because of presumed misconduct, but this is later retracted, supporters of party X will accept the retraction and no longer refer (much) to the misconduct. However, supporters of the opposing political party will refer to the misconduct even more in their reasoning than they would without a retraction. This worldview backfire effect can lead to a polarisation of beliefs when people are faced with corrective evidence that is congruent with the worldview of one group but dissonant with that of another, and thus makes evidence-based agreement difficult.

People’s worldviews also determine to some degree whom they trust. This can be problematic to the extent that trust is put into a source with no relevant expertise.

One would think that people are more affected by a retraction when it comes from an expert source. For example, someone who read the myth that the climate was cooling should be more convinced by a retraction from an expert climate scientist than by a retraction from their Uncle Bob.

However, recent data from my lab suggest this is not the case. The expertise of the retraction source seems to have no effect; what matters is trust. In other words: only retractions from a trustworthy source are effective, irrespective of perceived expertise. We could call this “the Uncle Bob effect”.

The effects of misinformation can be serious, and many cognitive mechanisms limit the effectiveness of retractions and refutations. It is therefore important to raise public awareness of these effects, and to promote educational campaigns that enhance the public’s scientific literacy and foster a healthy sense of scepticism.

And next time you have a health concern, go see your doctor and don’t ask Uncle Bob.

Ullrich Ecker is an Associate Professor at the University of Western Australia’s School of Psychology.