Australasian Science: Australia's authority on science since 1938

What’s the Evidence?

By Sue Ieraci

The terms “evidence-based” and “peer-reviewed” have become touchstones for reliability, but why should the views of peers count so much and what does “evidence-based” medicine really mean?

To paraphrase Socrates: “A wise person knows what he doesn’t know”. How many people bandying around the term “evidence-based practice” or “peer-reviewed research” know what these terms mean? How many of us can honestly claim to be able to critically evaluate clinical research?

Many detractors of modern scientific medicine believe that much medical practice is not “science-based” because it is not supported by a specific randomised controlled trial. That’s nonsense. A therapy can be science-based because it relies on the clinical sciences – anatomy, physiology, pathology and pharmacology. For example, draining pus from a wound is science-based, as is reducing a dislocated joint or slowing atrial fibrillation.

Furthermore, different research methods suit different interventions. While evidence from randomised controlled trials is recognised as high-grade because it minimises various biases, it is not suitable for population studies. Basic laboratory science might demand precise measurement or observation – not necessarily randomisation or blinding.

Many people in the sciences understand that the publication of research is aimed at an informed audience who read journals specific to their practice or expertise. We understand that journal “peer review” does not validate the paper nor its findings – it is a test of suitability for publication in the journal. Practical peer review occurs after publication – with exposure of the work to the authors’ true peers.

This is where knowledge and skill come into play. To evaluate the quality of research, and therefore the validity of its results, the reader needs the skills of critical evaluation as well as an understanding of methods and statistics – both are necessary. Evaluation also requires a fundamental understanding of the area of practice and of its underlying principles, including previous research.

Guides to critical review of research stress the importance of identifying the research question and assessing whether or not it is valid and answerable by the methodology used. They describe understanding the way the subjects or measurements were chosen and which were discarded (inclusion and exclusion criteria). They emphasise the importance of identifying potential sources of (non-intentional) bias – more complex and nuanced than obvious sources like funding.

Does a positive statistic translate into benefit for a particular patient? What is the number needed to treat before a single person benefits? Are the outcomes test-based or patient-based? Will the benefit outweigh the risk? Will it be cost-effective? Is it better than current practice?

For clinicians not constantly immersed in research, maintaining critical evaluation skills is hard work. It means journal clubs, workshops, conferences and courses. It also requires scepticism and good judgement when manufacturers promote their own products.

Just having “a good mind and a knowledge of English” is not enough. Anyone can now access, on the internet, abstracts of an enormous range of published papers. Readers might not be an “informed audience”, and their access might be to only the title and abstract – making it impossible for them, even if they had the skills, to critically review the paper. Many, however, are not constrained by this lack of insight.

Frustratingly, in these post-modernist times, my comments will be seen as “elitist”. Heaven forbid that a professional should claim expertise in their area of training and experience. In today’s world, where all opinions are considered equally valid and expertise is scorned, the lowest common denominator prevails. The blogger, the tabloid opinion-writer and the candlestick-maker all have opinions. They have done their research.

Anyone been to a journal club lately?

Sue Ieraci is an emergency physician who has worked in NSW public hospitals for 30 years. While maintaining a continuous clinical career, she has held roles in management and medical regulation, and been involved in health systems research.