
History of an “Outcome”

By Hugh Possingham

Assigning an outcome to any single grant, paper or person makes a mockery of the scientific process.

Late last year our research network scored another “outcome”. What’s more, it came “gift-wrapped” in a front page story in the Sydney Morning Herald, making it easy for everyone to see.

What was that outcome? The NSW government is adopting a version of “conservation triage” based on our cost-effectiveness framework, the Project Prioritisation Protocol (PPP), which ranks candidate projects for threatened species by their cost, their likelihood of success and the benefit to the species. Not everyone is happy with the idea of conservation triage, but it’s easy to demonstrate that the approach can generate enormous benefits for biodiversity conservation.
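Conceptually, the ranking comes down to simple arithmetic: expected benefit per dollar. Here is a minimal sketch of that kind of cost-effectiveness ranking in Python; the species names, dollar figures and the simple “fund down the list until the budget runs out” rule are illustrative assumptions for this column, not the published protocol or any agency’s actual data.

```python
# A minimal sketch of cost-effectiveness ranking in the spirit of PPP.
# All names, numbers and the greedy budget rule are illustrative only.
from dataclasses import dataclass


@dataclass
class Project:
    species: str
    cost: float      # dollars over the planning period
    success: float   # likelihood the project succeeds (0-1)
    benefit: float   # gain to the species if it succeeds (0-1)

    @property
    def cost_effectiveness(self) -> float:
        # Expected benefit per dollar: (benefit x likelihood of success) / cost
        return self.benefit * self.success / self.cost


def prioritise(projects: list[Project], budget: float) -> list[Project]:
    """Rank projects by cost-effectiveness and fund down the list while money remains."""
    funded = []
    for p in sorted(projects, key=lambda p: p.cost_effectiveness, reverse=True):
        if p.cost <= budget:
            funded.append(p)
            budget -= p.cost
    return funded


if __name__ == "__main__":
    candidates = [
        Project("Species A", cost=400_000, success=0.8, benefit=0.6),
        Project("Species B", cost=1_200_000, success=0.5, benefit=0.9),
        Project("Species C", cost=150_000, success=0.9, benefit=0.3),
    ]
    for p in prioritise(candidates, budget=1_000_000):
        print(p.species, f"{p.cost_effectiveness * 1e6:.2f} expected benefit per $M")
```

The point of the exercise is the ratio: a cheap project with a modest but likely benefit can outrank an expensive long shot, which is exactly what makes triage uncomfortable for some and transparent for everyone.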

In any event, outcomes from our research are always welcome, not least now that our network (the National Environmental Research Program – NERP – Environmental Decisions Hub) is being reviewed for future funding. It’s easy to point to outputs like meetings, workshops and science papers, but being able to say that something led to an outcome – a change in policy, a new approach to management, or the introduction of transparency to decision-making – is something special. It’s also something that takes time.

“But wait a sec,” I hear you say. “What’s this an outcome of? What meeting, workshop or paper led to this? Was it NERP or ARC money that generated this? Or did this come from earlier funding?”

Maybe you wouldn’t have asked such specific questions but I’m often asked: “What are the outcomes from government funding?” And the honest answer is it’s impossible to attribute an outcome to a single paper, a single quantum of funding or a single person. A brief history of this particular outcome is a good case in point.

Applying a cost-effectiveness methodology to underpin conservation triage is not a new idea. I proposed it to the federal Environment Department back in 1999 (at which time there was little interest), and the concept had been mentioned many times prior to this. It is, after all, basic common sense. Indeed, in 1998 an economist named Weitzman had published a similar approach, although I didn’t know that at the time.

We published a paper on the idea in 2002 and the New Zealand Department of Conservation started considering the approach in 2005 (led by the vision of senior scientist Richard Maloney). We then worked with some of their people to turn the concept into a workable procedure. This resulted in papers in 2008 (on conservation triage, led by Madeleine Bottrill) and 2009 (on PPP, led by Liana Joseph).

During this phase of its development, our group was receiving funding from the Commonwealth Environment Research Facility program (CERF) and the Australian Research Council. The thinking involved many people and much discussion by collaborators like Mick McCarthy, Tara Martin, Kerrie Wilson, Eve McDonald-Madden, Stephen Garnett, Mark Burgman and David Lindenmayer – to name just a few.

In 2010 Tasmania adopted conservation triage underpinned by cost-effectiveness, but unfortunately they had no money to implement it. At about the same time we started working with New South Wales on similar approaches.

In the past couple of years we have developed the protocol and associated software, and two new papers advancing the process are currently under review. These later stages of development have all been supported by NERP and CEED funding. Indeed the tool and thinking are being refined continuously.

Other states and other countries are also expressing interest in the approach. Even the Australian government is working with us on tailoring aspects of the process for national policy – 14 years after I originally broached the topic with them.

This all goes to show that assigning an outcome to any single grant or paper or person makes a mockery of the scientific process. An outcome is the accumulation of many discussions, papers, grants and meetings – usually over multiple funding and policy cycles. I’d suggest a good piece of research usually takes around 10–20 years before it starts producing outcomes of note.

Returning to the success of PPP, New Zealand has now been applying it for about 5 years. Richard Maloney says the early results indicate “a fantastic improvement” on how things were done before.

Now that’s a worthwhile outcome.

Hugh Possingham is the Director of the Environmental Decisions Hub of the National Environmental Research Program. He is based at the University of Queensland.