Australasian Science: Australia's authority on science since 1938

Lie to Me

Image: Simon Kneebone

By Michael Cook

Will brain scans revolutionise our legal system?

On 12 June 2008, 24-year-old Aditi Sharma became the first person to be convicted of murder based on a brain scan. The prosecution alleged that the MBA student had organised a tryst with her former fiancé at a McDonald’s in the Indian city of Pune. There she had given him sweets laced with arsenic.

Ms Sharma protested that she was innocent, and the police gave her a novel chance to prove it. She agreed to have an electroencephalogram that would be analysed by software developed by Gujarati neuroscientist Champadi Raman Mukundan. He called it a Brain Electrical Oscillations Signature test (BEOS).

Ms Sharma said nothing during the procedure, but when details of the crime were read out, sections of her brain lit up. The prosecutor successfully argued that BEOS analysis proved that she clearly had “experiential knowledge” of the murder. In her judgement, Judge Shalini Phansalkar-Joshi stated that the expertise of the BEOS operator “can in no way be challenged”. She sentenced the young woman to life imprisonment.

The case is still not settled. In September of the same year India’s National Institute of Mental Health and Neuro Sciences declared that brain scans were unreliable in criminal cases. Ms Sharma thereupon appealed to the high court, complaining that her conviction had been based on “bad science”. She was released on bail — although it may be years before her case is reviewed.

Neuroscientists and bioethicists elsewhere were horrified that this new technology had been accepted as incontrovertible evidence. “Someday we may invent a perfect lie detector,” Hank Greely, a Stanford University expert on neurolaw, told the International Herald Tribune. “But we need to demand the highest standards of proof before we ruin people’s lives based on its application.”

Nonetheless, many neuroscientists have little doubt that their specialty will eventually transform how the law works. Michael Gazzaniga, a leading American neuroscientist, has even declared that someday it will “dominate the entire legal system”.

The stakes are high. Currently the criminal law distinguishes people who have caused harm deliberately (first-degree murder) from those who caused it inadvertently (involuntary manslaughter). But some influential research in cognitive neuroscience suggests that there may be no essential difference, as the sensation of volition occurs a fraction of a second after the brain has already determined the action. In other words, free will is an illusion. According to many specialists in the burgeoning field of neurolaw, the clash with the traditional understanding of innocence and guilt means that criminal law must be “radically reconceptualised”.

Determination of guilt and innocence is not the only issue that could be decided with a brain scan. Vast new areas could open up within the legal system.

• Civil law suits that turn upon mental capacity could be settled with objective criteria. A lawyer could prove that his client entered into a contract that was too difficult for him to understand.

• People suing for compensation could give verifiable proof that they are suffering from bad backs or psychological trauma.

• Parole boards could be reassured that prisoners have been rehabilitated. Brain scans could verify that sex offenders no longer pose a threat to the community.

• Criminals could present brain scans to argue that sentences should be mitigated because of brain abnormalities.

• Defence attorneys could request that prospective jurors be scanned to detect whether they harbour prejudices against a client, or whether they are more likely to insist upon a harsh sentence.

At the moment, the main gateway for neurolawyers to peer into a client’s mind is a functional magnetic resonance imaging machine. This is a gigantic (and expensive) metal doughnut that yields images of the brain quickly and with a high degree of spatial and temporal accuracy.

Two companies are currently marketing fMRI scans for lie detection in the US: No Lie MRI in California and Cephos Corporation in Massachusetts. On its website, No Lie MRI estimates that the market for accurate truth verification is at least US$3.6 billion. And because fMRI scans are said to be far more accurate than old-fashioned polygraphs, with their quivering needles tracing a graph of a subject’s perspiration and pulse, their financial potential is far greater.

The only hitch is that US courts have so far declined to admit fMRI scans as evidence in trials. However, defence lawyers are constantly probing the system. As the technology improves, sooner or later a judge will accept them.

The latest high-profile American attempt concluded in June in Tennessee, where Lorne Semrau, the CEO of two nursing homes accused of rorting Medicare, pleaded that he had acted in good faith. He submitted brain scans taken by Cephos to demonstrate his sincerity.

The judge dismissed them as evidence, but significantly added that “in the future, should fMRI-based lie detection undergo further testing, development and peer review, improve upon standards controlling the technique’s operation, and gain acceptance by the scientific community for use in the real world, this methodology may be found to be admissible”.

Broadly speaking, there are two kinds of critics of neurolaw. The first accept that fMRI technology will be extremely useful in assessing guilt or innocence – but not yet.

Theoretically, fMRI scans are far more accurate than old-fashioned and discredited polygraphs because they are measuring truthfulness itself, not anxiety about being accused of deception. Telling the truth comes automatically, but lying requires an executive decision to withhold a truthful response.

What the fMRI does is measure changes in the magnetic properties of oxygen-grabbing haemoglobin molecules in red blood cells. When a subject tells a lie, the scan captures the presence of greater activity in regions of the brain that have been linked to deception.

However, there is more to thinking than haemoglobin. Those brightly coloured images in fMRI scans are not photographs of the state of the brain but composite statistical representations distilled from recordings taken seconds apart. Their accuracy depends completely upon how well the experiment was planned and executed.
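A toy sketch makes the point concrete (this is not a real fMRI pipeline; the voxel counts, effect sizes and thresholds below are invented purely for illustration). It builds a voxelwise statistical map from simulated “lie” and “truth” trials, and shows that which voxels “light up” depends on the significance threshold the analyst chooses:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 1000, 20

# Hypothetical BOLD-style data: most voxels are pure noise; a handful
# respond slightly more strongly during "lie" trials than "truth" trials.
truth = rng.normal(0.0, 1.0, size=(n_voxels, n_trials))
lie = rng.normal(0.0, 1.0, size=(n_voxels, n_trials))
lie[:20] += 0.8  # 20 genuinely responsive voxels

# Voxelwise two-sample t statistic (Welch): the "activation map" is just
# this array of test statistics, one per voxel.
se = np.sqrt(truth.var(axis=1, ddof=1) / n_trials +
             lie.var(axis=1, ddof=1) / n_trials)
t = (lie.mean(axis=1) - truth.mean(axis=1)) / se

for thresh in (2.0, 3.5):
    print(f"|t| > {thresh}: {np.sum(np.abs(t) > thresh)} voxels 'light up'")
```

At the lenient threshold, far more than the 20 truly responsive voxels cross the line, because noise alone pushes dozens of null voxels over it; at the strict threshold, the map shrinks dramatically. The picture a court sees is a product of these analytic choices, not a photograph.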

A brain telling a lie can only be detected if it is compared with how an average brain tells the truth. Even if we know what an average brain looks like, it is hard to know whether or not this brain is an outlier – a truth-telling brain affected by prescription drugs or a genetic anomaly, for instance. The scans are so complex that they require experienced judgment to assess them properly. And judgement can involve bias.

Furthermore, can scans give accurate testimony about past mental states? If, for instance, a criminal asks for his sentence to be mitigated because of a brain abnormality, it is nearly impossible to assert that this existed when he committed the crime months or even years ago. Brains change with time and experience.

In the Semrau case, the judge noted yet another hurdle. “While it is unclear from the testimony what the error rates are or how valid they may be in the laboratory setting, there are no known error rates for fMRI-based lie detection outside the laboratory setting, ie, in the ‘real-world’ or ‘real-life’ setting,” he wrote. There is a world of difference between telling a white lie in a psychologist’s laboratory and telling a real lie about a murder.

Yet another challenge comes from other scientists. Early in 2009 many neuroscientists were infuriated by an incendiary paper in the journal Perspectives on Psychological Science, which suggested that most inferences drawn from fMRI scans were basically flimflam. The author, Ed Vul, a postgraduate student at the Massachusetts Institute of Technology, pulled no punches in “Voodoo Correlations in Social Neuroscience” – although he later gave it a blander title.

On statistical grounds, he contended that “a disturbingly large, and quite prominent, segment of fMRI research on emotion, personality and social cognition is using seriously defective research methods and producing a profusion of numbers that should not be believed.”

Is it fair to use brain scans to send people to jail if fundamental issues like these are still being debated?

Vul’s skepticism supports the other school of critics of neurolaw. How do we know that those seductively coloured images of the brain represent the mind of the person who is being monitored? In other words, aren’t the brain and the mind distinct?

If they are the same, then the mind is only an immensely complex physical system – a machine, basically. All of our actions ultimately have a physical explanation. If this is the case, doesn’t the traditional concept of criminal responsibility change when prisoners start pleading that “my brain made me do it”?

Somewhat surprisingly, some legal theorists insist that determinism doesn’t change the way the law operates. Stephen J. Morse, an American expert in criminal responsibility and neuroscience, says that “free will… is not a criterion for any criminal law doctrine, and… it is not even foundational for criminal responsibility. Indeed, the law’s positive criteria for criminal responsibility are not inconsistent with the truth of determinism.” If this is the case, perhaps the law will change less than we think under the pressure of neuroscience.

However, this debate has been going on for 2500 years, and the independent existence of the mind – and of free will – still has robust champions. For instance Raymond Tallis, a British doctor and philosopher, argues that neurolaw is a worrying development. “Our knowledge of the relationship between brain and consciousness, brain and self, and brain and agency is so weak and so conceptually confused that the appeal to neuroscience in the law courts, the police station or anywhere else is premature and usually inappropriate. And, I would suggest, it will remain both premature and inappropriate. Neurolaw is just another branch of neuromythology,” he wrote in The Times of London.

At the moment, though, the determinists are the most prominent players in neurolaw. Centres for law and neuroscience are springing up all over the US in well-endowed universities.

Moreover, the brain scan lie detection business has plenty of potential clients outside the court system. No Lie MRI, for instance, says that it is potentially a “substitute for drug screenings, resume validation and security background checks” for corporations. Clients who once would have hired a private eye could use it for “risk reduction in dating, trust issues in interpersonal relationships and issues concerning the underlying topics of sex, power and money”.

And the spooky people whose business is secrets, the intelligence community, are also eyeing fMRI technology. One bioethicist has pointed out that the US Department of Defense helped to fund groundbreaking research in fMRI lie detection at the University of Pennsylvania. Jonathan Marks, of Pennsylvania State University, has already expressed his concern that interrogators will use brain scans as a means of selecting suspects for more aggressive interrogations.

“There is a profound risk,” he writes, “that intelligence personnel will be seduced by the glamour of fMRI and its flashy images, and that they will overlook the limitations of the technology… the subjectivity of interpretation, and the complexity of brain function outside the realm of playing cards and controlled studies.”

There are so many potential hazards with the use and interpretation of this novel technology that Hank Greely, who has become one of the most enthusiastic advocates of neurolaw, wants government intervention. He argues that all non-research use of lie-detection technology should be banned until it has been approved by a government agency.

He envisages a process similar to drug approval by the US Food and Drug Administration. “We have seen lives shattered before, with and without these technologies,” he has written. “Requiring proof of safety and efficacy… is a careful step towards assuring that these technologies are used wisely.”

With Aditi Sharma’s fate in mind – a life sentence on the basis of a brain-imaging technology that had never been peer-reviewed or independently tested – government regulation might not be such a bad idea.

Michael Cook is editor of the bioethics newsletter BioEdge.