
Brain Stimulation & Memory: How Strong Is the Evidence?


The data show convincingly that tDCS does not enhance brain function.

By Jared Cooney Horvath

For nearly 15 years, scientists have reported that running a weak electric current through the brain can improve learning and memory. What if we got it wrong?

If you’ve ever licked the end of a 9-volt battery, you know the result can be an eye-opening jolt: the tongue is a conductor that completes an electric circuit between the ends of the battery. Another organ that conducts electricity is the brain, and 15 years ago a group of German researchers reported that they could use it to complete an electric circuit in the same manner. But here’s the interesting bit: they reported that as the current flowed through the brain, neural activity was significantly enhanced.

The technique of running a weak electric current through the brain to modulate neural activity is today called transcranial direct current stimulation (tDCS). Since 1999, more than 1400 scientific articles have been published suggesting that tDCS can be used to improve memory, accelerate learning or enhance physical performance. Three tDCS devices are currently available for public purchase without a medical prescription, so there are likely hundreds of people zapping their brains right now in the hope of supercharging their cognitive and behavioural abilities.

As my background is in education, I decided to undertake research to determine whether tDCS could improve classroom performance in students with learning difficulties such as attention deficit disorder and autism spectrum disorder. For 18 months I put tDCS through the wringer, conducting numerous experiments using various learning, memory and motor tasks. Unfortunately, I was unable to find a single significant effect using these devices.

After ensuring that my tDCS machines were functional (check), my experiments were sound (check) and my analyses were accurate (check), an important question occurred to me: what if tDCS doesn’t enhance brain function as scientists have long believed?

Luckily, there is a way to address this question – it’s called meta-analysis. The idea is fairly simple: any individual experiment can produce erroneous results due to faulty equipment, inappropriate analysis, under-engaged participants and so on. We can account for this by extracting data from many experiments, pooling them into a single data set and analysing that unified data set. When all the data from many different (but similar) experiments are treated as one, any erroneous result from a single experiment is diluted, leaving us with the true effect of an intervention.
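
To make this concrete, here is a minimal sketch in Python of the simplest form of pooling – inverse-variance (fixed-effect) weighting. The effect sizes and standard errors are invented purely for illustration; they are not values drawn from the tDCS literature.

    import math

    # (effect size, standard error) for four hypothetical experiments
    studies = [(0.40, 0.25), (-0.10, 0.30), (0.05, 0.20), (0.15, 0.35)]

    # Weight each study by the inverse of its variance, so that noisy
    # experiments contribute proportionally less to the pooled estimate.
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"Pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")

The weighting means a single aberrant experiment – however striking its result – is diluted by its noisier error bars rather than allowed to dominate the pooled estimate.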

So that’s what I did. For months I scoured the 1400 published tDCS articles, extracted relevant data and created several large, unified data sets. When I analysed everything together I found something interesting: it turns out that the data show, quite convincingly, that tDCS does not enhance brain function. In fact, tDCS has little-to-no reliable effect on brain activity, behaviour or cognition whatsoever!

How did this happen? How can collating 15 years’ worth of scientifically valid data result in no effect?

Several practical explanations at the experimental level may help to explain this. The first is the lack of a proper control condition. A basic tenet of scientific research – especially research exploring the effects of a device or treatment – is the inclusion of a control condition. Its purpose is to ensure that any generated effect is truly a result of the intervention and not of secondary, unaccounted-for variables.

Interestingly, more than 80% of the foundational tDCS research did not include a control condition. Rather, these studies simply compared tDCS to itself or to nothing at all. Without a proper control it is possible that researchers were simply attributing typical, natural fluctuations of brain activity to tDCS.
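
A hypothetical simulation makes the danger plain: if every participant simply performs a little better the second time they attempt a task (an ordinary practice effect), an uncontrolled pre/post comparison will credit that improvement to the stimulation. All the parameters below are invented for illustration.

    import random
    random.seed(1)

    def pre_post(n=30, practice=0.3):
        # Every participant improves by 'practice' on retest,
        # whether or not they received any stimulation.
        pre = [random.gauss(0, 1) for _ in range(n)]
        post = [p + practice + random.gauss(0, 0.5) for p in pre]
        return pre, post

    def mean(xs):
        return sum(xs) / len(xs)

    # Uncontrolled design: the tDCS group is compared only with itself.
    tdcs_pre, tdcs_post = pre_post()
    print("tDCS pre-to-post change:", round(mean(tdcs_post) - mean(tdcs_pre), 2))

    # Sham-controlled design: subtract the change seen without stimulation.
    sham_pre, sham_post = pre_post()
    sham_change = mean(sham_post) - mean(sham_pre)
    print("Change beyond sham:",
          round(mean(tdcs_post) - mean(tdcs_pre) - sham_change, 2))

Only by subtracting the change observed in a sham group can the stimulation’s genuine contribution – in this toy example, none – be isolated.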

Another practical explanation is non-commensurate outcome combination: a concept best explained using a specific example. A claim made quite often is that tDCS can improve working memory – a claim supported by a number of published research reports. However, a closer examination of these reports reveals that although each experiment did find a significant effect of tDCS on working memory, the aspect of working memory improved by tDCS differs between experiments. For instance, some report that tDCS improves working memory accuracy, while others report that it improves working memory speed and others report that it improves false-alarm rate, response confidence, correct-rejection rate etc. It’s clear that although these measures all fall under the same domain – working memory – they are non-commensurate and cannot be meaningfully combined to support a unified claim of efficacy.

A final practical concern is a lack of direct replication. Scientific progress is largely predicated upon the replication of an experimental finding across different laboratories. Interestingly, direct replication of published experiments in many scientific fields is rarely undertaken and even more rarely reported. We, as researchers, more often place our trust in the veracity of the literature and conduct experiments that add to or progress from previously published work.

With regards to tDCS, fewer than 50 experiments to date have been directly replicated – and, as noted above, meta-analyses revealed that none of these generated a significant or reliable result. Meanwhile, more than 250 experiments have not been directly replicated, so it is unclear whether the majority of the tDCS field is reliable or simply predicated upon natural fluctuations and/or statistical false positives.
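
A quick back-of-envelope simulation shows why this matters: even if tDCS truly did nothing, the conventional significance threshold of p < .05 guarantees that a handful of one-off experiments will “succeed” by chance alone. The sample size and crude z-test below are illustrative assumptions, not a reanalysis of the actual literature.

    import random
    random.seed(2)

    def null_experiment(n=20):
        # Two groups drawn from the same distribution: no true effect exists.
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        diff = sum(a) / n - sum(b) / n
        se = (2 / n) ** 0.5           # standard error of the mean difference
        return abs(diff / se) > 1.96  # "significant" at p < .05, two-tailed

    hits = sum(null_experiment() for _ in range(250))
    print(f"Chance 'successes' among 250 null experiments: {hits}")  # ~12 expected

Roughly a dozen of 250 truly null experiments will clear the threshold – and if those are the ones that reach publication, the literature will look far more positive than the underlying reality.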

Beyond practical explanations, the possible absence of any true tDCS effect highlights several theoretical and philosophical concerns with the practice of science.

The first of these is the existence of choice in research. Although many are taught that research is an objective undertaking devoid of intention or bias, the truth is that researchers make choices on a daily basis. We choose which questions to ask, which data are relevant and which are safe to ignore, which analysis tools to use and which findings to report. Each of these choices is coloured by assumptions held by the researcher and born from the established literature.

For example, if I choose to run an experiment exploring the effect of tDCS on attention, embedded within this choice are the assumptions that attention is a function that is largely mediated by a definable region or network within the brain; that modulating activity within this region or network can generate a measurable change in the behavioural manifestation of attention; and that tDCS can modulate this region or network.

Because the bases for these assumptions lie in the scientific canon, if I fail to find an effect of tDCS on attention I am more likely to blame my experimental set-up than the assumptions themselves. My next step would be to tweak the experimental parameters until I obtained the outcome I expected based on my assumptions. In fact, this is what I attempted for the first 18 months of my tDCS research – with no success.

Unfortunately, the foundational theories of tDCS may be incorrect. This means that many assumptions researchers are using to guide their choices about which experiments to run and which results are “acceptable” may also be incorrect. If we change our underlying assumptions we will change the choices we make in our experiments. If we change our experimental choices we will likely choose to look for (and find) far more null effects than positive effects from tDCS.

This leads to another important theoretical/philosophical concern: experimental interpretation. Although the common conception of science is one of “data” as an explicit series of events and “scientist” as a disinterested party impartially drawing cause/effect links between these events, the truth is a bit less certain. In reality, many data points are isolated and ambiguous. As such, scientists must choose which events are correlated and interpret the implications of these relationships. And, as with all human endeavours, scientific interpretations are undoubtedly coloured by the scientist’s own experiences, theoretical leanings and assumptions.

For instance, the very same maps created by John Snow to trace an outbreak of cholera to the Broad Street water pump and “prove” the germ theory of disease were used to equal effect by his contemporaries to support the contradictory miasma theory of disease. Similarly, modern analyses of the solar eclipse photographs used to “prove” Einstein’s theory of relativity provide equal support for traditional Newtonian physics. In both cases, data were ambiguous and meaning was supplied via interpretation based on a priori (if competing) assumptions.

With regards to tDCS, as current assumptions support the efficacy of this technique, researchers are more apt to interpret ambiguous data as favourable to tDCS. In fact, an examination of the majority of tDCS research papers reveals that significant findings are often couched among a greater number of non-significant findings. For example, a paper may report that tDCS improves motor learning while also reporting that tDCS does not impact motor speed, accuracy, skill, emotions, physical sensations or non-motor learning.

Yet researchers tend to focus only on the significant findings while glossing over the greater proportion of non-significant ones. This pattern is more suggestive of interpretation based on a priori assumptions than of interpretation based on unambiguous data.

In the end, it’s my hope that the compelling story of tDCS serves as an impetus to breathe more context and authenticity into public discourse concerning research and the scientists behind it. In an increasingly scientific world, ensuring that public decisions are based on the truthful notion that science is a human enterprise with successes, failures and relatable stories will be far more beneficial than basing decisions on the intimidating notion that science is beyond scrutiny.

Jared Cooney Horvath is a PhD candidate in cognitive neuroscience at The University of Melbourne and President of The Education Neuroscience Initiative.