
Science advice and policy making

By Robert M. May

Lord May examines the challenges facing tomorrow’s world: anthropogenic climate change; feeding more people; and designing a financial system that allocates capital in a responsible and effective way.

To borrow a phrase, we live in the Best of Times and the Worst of Times. This makes it particularly pleasing to see a resurgent Royal Society of New South Wales (RSNSW) playing a larger part in the communal life of the state.

It is the Best of Times in the sense that, thanks to our increasing understanding of how the natural world works, the average individual – in both the developed and developing worlds – lives a longer and healthier life than ever before. Fifty years ago the average life expectancy on Earth was 46 years; today it is 68 years. The counter-intuitive 46-year figure derives largely from the gap in life expectancy between the developed and developing worlds, which has since shrunk from 26 years to a still disgraceful 12 years. Over the past 40 years, global food production has more than doubled, on only 10% more land; the continuing problems of malnourishment derive from inequitable distribution, a problem which has been with us since the dawn of agriculture.

The flip side of these advances is that population numbers continue to grow. Human numbers have trebled, to just on 7 billion, over the past 70 years. And although global average fertility rates are today roughly at replacement level, with the average woman having roughly one daughter, the “momentum of population growth” is still carrying numbers upward toward around 9.8 billion by the middle of the century, as the currently pyramidal age-structure rounds out toward being more rectangular. Moreover, the ecological footprint stamped on the planet by the average individual’s requirement for energy, food, and other materials and resources continues its upward growth. Humanity’s overall ecological footprint today is around 50 times what it was at the publication of the Origin of Species, 150 years ago.
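
The “momentum” point can be made concrete with a toy cohort projection. The sketch below is a minimal Leslie-matrix calculation, with purely illustrative numbers rather than demographic data: fertility is set exactly at replacement (one daughter per woman in the reproductive age class), yet the pyramid-shaped starting population still grows by some 40% before levelling off, as the pyramid rounds out into a rectangle.

```python
# Toy cohort projection of population momentum (illustrative numbers only).
import numpy as np

# Four 25-year age classes: 0-24, 25-49 (reproductive), 50-74, 75+.
# Toy assumptions: everyone survives to the next class, and each woman
# aged 25-49 has exactly one daughter (replacement-level fertility).
L = np.array([
    [0.0, 1.0, 0.0, 0.0],  # births: one daughter per reproductive woman
    [1.0, 0.0, 0.0, 0.0],  # ageing 0-24  -> 25-49
    [0.0, 1.0, 0.0, 0.0],  # ageing 25-49 -> 50-74
    [0.0, 0.0, 1.0, 0.0],  # ageing 50-74 -> 75+
])

pop = np.array([4.0, 3.0, 2.0, 1.0])  # pyramidal start: many young, few old
for step in range(5):
    print(f"t = {25*step:>3} yr  total = {pop.sum():4.1f}  structure = {pop}")
    pop = L @ pop  # project the population forward one 25-year step
```

Total numbers climb from 10 to 14 before stabilising, even though no woman ever has more than one daughter: the growth is carried entirely by the young cohorts already in the pyramid.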

These problems could all, in principle, be solved. But the solutions require coordination and cooperation at every level, from neighbourhoods and communities through to nation states. And there is little evidence, as yet, of willingness to acknowledge the need for such cooperative activity.

The physical sciences are often called the “hard sciences”, which is a misnomer: with their conservation laws and invariance principles, the physical sciences are the easy ones. It is not surprising that they developed first. Although the physical sciences ultimately underpin the biological sciences, the complexity of the evolutionary processes whereby Darwin’s “descent with modification” shaped the living world makes for more difficult problems. Nevertheless, from molecular genetics to the structure and function of ecosystems, we have made great progress over the past half-century and more. The hardest problems, however, lie in the social sciences, which have all the complexity of the life sciences made yet more difficult by the fact that the subjects under study tend to react to being studied. This is especially unfortunate, because the social sciences clearly hold the key to solving our problems of collective action.

In what follows, I first sketch a subset of the challenges facing tomorrow’s world: anthropogenic climate change; feeding more people; designing a financial system that allocates capital in a responsible and effective way. Against this background, I focus on the role of science advice in policy making, indicating some ideal principles along with the difficulties that commonly arise in practice.

Climate Change
Over our planet’s half-billion-year history, there have been times when it may have been a ball of ice and snow (or something close to it: “slush-ball earth”), and other times when tropical animals roamed the poles. During most of humanity’s tenancy of the planet, ice-ages came and went. But ice-core records show that levels of carbon dioxide in the atmosphere held steady at around 280 parts per million (ppm), give or take 10 ppm, from well before the beginning of the first cities. Indeed some people, noting that the past 10 millennia have been unusually steady, have argued that the beginnings of agriculture and the subsequent development of cities and civilisations are a consequence, not a coincidence.

The Industrial Revolution is usually taken to have begun in the 1780s, after James Watt developed his steam engine. As the industrialising countries burned fossil fuels at ever-increasing rates, carbon dioxide levels rose. At first the rise was slow: reaching 315 ppm took about a century and a half. The increase accelerated during the 20th century, reaching 330 ppm by the mid-1970s and 360 ppm by the 1990s, and today it is closing on 400 ppm. Such a change in the magnitude of the greenhouse gas blanket on so short a time-scale has not been seen since the most recent ice-age ended, around 10,000 years ago. And we seem headed towards 500 ppm by 2050, approaching twice pre-industrial levels.

The long time lags involved before these changes express themselves fully are easily appreciated by physicists, but often seem counter-intuitive to others. Once in the atmosphere, a carbon dioxide molecule has a characteristic “residence” time of the order of a century. And the time taken for the oceans to expand and come to equilibrium with a given level of greenhouse warming is several centuries. Deserving of emphasis is the fact that the last time Earth experienced greenhouse gas levels as high as 500 ppm was some 20-40 million years ago, when sea-levels were 100 m higher than today. Some have even argued that we should recognise that we are now entering a new geological epoch, the Anthropocene, which began around 1780. The Intergovernmental Panel on Climate Change (IPCC) has been consistently conservative in its predictions of the extent to which global average temperatures will be raised by this thickening of the greenhouse gas blanket. Moreover, to those unfamiliar with the difference between daily temperature fluctuations and global average temperature changes, the suggestion that average temperatures may rise by 5°C or more by 2100 seems unworrying. But there is a huge difference between daily fluctuations and global averages sustained year on year – the difference in average global temperature between today and the last ice-age is only around 5°C.
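
The multi-century ocean lag can be illustrated with the simplest textbook energy-balance model, in which an ocean of effective heat capacity C warms toward equilibrium with a radiative forcing F at a rate set by a feedback parameter λ, giving a response time τ = C/λ. The parameter values in the sketch below are round illustrative choices of mine, not IPCC figures; the point is only that a deep-ocean heat capacity makes τ run to centuries.

```python
# One-box energy-balance sketch of the lag between forcing and warming:
#     C dT/dt = F - lam * T
# Parameter values are illustrative round numbers, not IPCC estimates.
import numpy as np

lam = 1.2    # climate feedback parameter, W m^-2 K^-1
C = 300.0    # effective deep-ocean heat capacity, W yr m^-2 K^-1
F = 3.7      # sustained forcing from a CO2 doubling, W m^-2

tau = C / lam                 # e-folding response time, years
print(f"response time ~ {tau:.0f} yr, equilibrium warming ~ {F/lam:.1f} K")

# Forcing switched on at t = 0; analytic solution of the box model.
for t in (10, 50, 100, 300):
    T = (F / lam) * (1 - np.exp(-t / tau))
    print(f"after {t:>3} yr: {T:.2f} K warming ({T*lam/F:.0%} of equilibrium)")
```

On these toy numbers, a century after the forcing is imposed the world has seen only about a third of the warming to which it is already committed.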

The time-scales for some important non-linear processes involving climate change are admittedly uncertain. As ice-caps melt, surface reflectivity changes, causing more warming and faster melting. So the precise time-scale for ice-caps to disappear is unclear. As northern permafrost thaws, methane gas is released, which accelerates global warming. Increased freshwater run-off from glaciers in the Atlantic region will reduce the salinity of surface water, which in turn reduces its density. Such changes in marine salt balance have, in the past, affected the fluid dynamical processes which ultimately drive the Gulf Stream, switching it off on ten-year time-scales (this is, however, seen as unlikely within the next century or so).

In the UK, following his party’s election victory in 1997, Tony Blair’s speech to the Party Conference majored on climate change. In 2008 the UK Climate Change Act was passed. It commits the UK and devolved administration Governments to setting and meeting carbon budgets, and to preparing for climate change. The legislation also established, as an independent statutory body, the Committee on Climate Change (CCC). The Climate Change Act requires that the CCC report annually to Parliament on progress in meeting the carbon budgets; the third report was published in June 2011. The CCC also has an adaptation subcommittee, which published its first report in July 2011.

Not surprisingly, there exists a climate change “denial lobby”, which is very well-funded and highly influential in some countries. Sadly, Australia appears to be one such country. This denial lobby has understandable similarities, in both attitudes and tactics, to the tobacco lobby that continues to deny that smoking causes lung cancer. The book Merchants of Doubt (Oreskes & Conway, 2010) gives a well-documented account of the activities of, and techniques deployed by, these loose-knit groups of skilful (if unscrupulous) lobbyists allied with a few right-wing scientific ideologues. It is a category error to call these people sceptics. In the early days there was – as always in the early stages of scientific understanding – real need for sceptical scientific challenge. Even now, as noted above, there remain uncertainties about the timescales on which some important processes will operate. But, helped by computational power doubling every 18 months for the past several decades, it is not surprising that one recent study of the most active climate scientists found “97%/98%” agreement on anthropogenic climate change. A separate study shows that this statistic simply reflects the gist of the refereed scientific literature. And in the UK, public opinion broadly lines up with the scientific consensus, with a recent professional poll finding 80% agreeing that climate change is real and serious.

Feeding Tomorrow’s World
The Green Revolution in agriculture, referred to in the opening section of this paper, has for the past several decades enabled food production roughly to keep pace with global population growth. There are, however, worrying signs that these advances are reaching a plateau. Furthermore, although the Green Revolution has been “green” in the sense of being increasingly effective in turning photons from the sun into food, it has been far from “green” in the sense of being environmentally friendly. What is needed – both to feed still-growing populations and to do so in less environmentally damaging ways – is a “Doubly Green Revolution” (Conway, 1997). In short, we could not feed today’s population with yesterday’s agriculture, and it is doubtful whether we can feed tomorrow’s with today’s. The Green Revolution’s doubling of food production involved, amongst other things, massive inputs of fertilisers subsidised by fossil-fuel energy; around the globe, more than half of all the atoms of nitrogen and of phosphorus in the green plant material that grew last year came from artificial fertilisers rather than from the natural biogeochemical cycles that built the biosphere and struggle to maintain it. The consequent impacts of habitat loss and other disturbances upon the diversity of the plants and animals with which we share our planet are only just beginning to be fully appreciated.

I share the view that the solution to this dilemma lies in using our remarkable advances in understanding the molecular machinery whereby plants and animals assemble themselves, to design crops that are optimally adapted to their environment, rather than – as at present – wrenching their environment to suit them by using fertilisers, herbicides, pesticides and other artificial interventions. Although today’s crops have undergone eons of genetic modification by selective plant breeding, so that only an expert can recognise their wild ancestors, there is currently much resistance to using today’s greater understanding of molecular genetics deliberately to produce crops with desirable characteristics. Indeed, the words “genetic modification”, or GM, have for many people become a term of derogation, in ways which make no scientific sense. There is admittedly some justification for this, in that – in contrast to the Green Revolution, which derived from public money and was focused on public benefits – the first applications of the new GM technology were funded privately (by firms such as Monsanto), and could be seen as primarily serving corporate interests.

By now, however, 25 years of research funded by a wide variety of organisations (including the EU, private charities and research councils) has found no scientific evidence that GM plants pose higher risks for the environment or for food safety than conventional plants and organisms. This of course does not prove that GM methods are 100% safe (nothing can, for any novel food), but it makes clear that there is no evidence to the contrary. North America, China, South America and India are actively exploiting the opportunities offered by GM, producing crops which raise yields in a sustainable way, increase resistance to diseases and pests, and are better adapted to environmental stresses such as drought or low temperatures. In 2010 something like 15 million farmers planted GM crops, covering well over one million square kilometres.

Europe, and the UK in particular, along with Australia have been among the leaders in developing this beneficial new technology. But opposition from well-intentioned, but woefully and wilfully uninformed, NGOs and other campaigners has so far hindered these countries – and their environments – from reaping the consequent benefits.

Optimising the Financial System
Recent events have strongly suggested that the financial system, upon which all economies depend, is not optimally designed. Of course, it never was “designed”, but has evolved over many centuries, guided by changing customs and beliefs, which have rarely (if ever) been grounded on evidence that would pass muster in the physical or biological sciences.

In particular, the broad regulatory framework set out in Basel I and Basel II focused on minimising risk for individual banks. Here I am using the word “bank” as shorthand for a wide variety of financial institutions. It is now increasingly recognised that the diversification thus encouraged – essentially taking advantage of the statisticians’ Central Limit Theorem to spread risks more widely – was indeed sensible for each individual bank, viewed in isolation. But the consequence for the banking system as a whole was to diminish diversity, as banks became both more similar in their asset holdings and more densely interconnected. A series of papers and speeches by Andrew Haldane, Executive Director for Financial Stability at the Bank of England, sets this out clearly (Haldane, 2009a, 2009b).
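
Haldane’s point lends itself to a minimal Monte Carlo sketch, given below with purely illustrative parameters (they are not drawn from his papers). Each bank’s diversified portfolio has small variance, exactly as the Central Limit Theorem promises; but once every bank holds the same diversified portfolio, their fortunes become perfectly correlated, and the probability that half the system fails at once jumps from essentially zero to a few percent.

```python
# Monte Carlo sketch: diversification vs. diversity (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
n_banks, n_assets, n_trials = 20, 50, 10_000
loss_threshold = -0.05  # a bank "fails" if its portfolio return falls below this

def p_mass_failure(shared_assets: bool) -> float:
    """Probability that at least half the banks fail in the same period."""
    mass_failures = 0
    for _ in range(n_trials):
        if shared_assets:
            # Every bank holds the same diversified portfolio.
            returns = np.tile(rng.normal(0.0, 0.2, n_assets), (n_banks, 1))
        else:
            # Each bank holds its own independent set of assets.
            returns = rng.normal(0.0, 0.2, (n_banks, n_assets))
        bank_returns = returns.mean(axis=1)  # CLT: per-bank variance ~ 1/n_assets
        if (bank_returns < loss_threshold).sum() >= n_banks // 2:
            mass_failures += 1
    return mass_failures / n_trials

print("P(mass failure), independent portfolios:", p_mass_failure(False))
print("P(mass failure), identical portfolios:  ", p_mass_failure(True))
```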

Not surprisingly, much work is now focused on systemic risk, as distinct from risk to individual banks. The basic idea is to make it more difficult for the failure of any single bank to propagate throughout the banking network, producing cascades of collapse. For example, in the UK the distinguished economist Sir John Vickers is chairing an Independent Banking Commission, which will publish its recommendations in September 2011.

In broad terms, such bodies, whether in the USA, UK or EU, seem likely to recommend that all banks be required to keep larger capital reserves and/or other forms of liquidity than has recently been the case. Recognising the disproportionate influence of the biggest banks, which are akin to what epidemiologists call “superspreaders” of infection, there is also the strong suggestion that such banks hold relatively larger capital reserves; the contrary has been the recent practice. Other suggestions are that leverage levels be hauled back well below those of recent years, and that the magnitude of capital reserves be countercyclical (larger in boom times, lower in bust times, again to the contrary of the recent past).
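
The leverage suggestion rests on simple arithmetic: with assets equal to L times equity, a fractional fall of just 1/L in asset values exhausts a bank’s equity, so hauling leverage back directly enlarges the shock a bank can absorb. A two-line sketch (the leverage ratios are illustrative round numbers):

```python
# Leverage arithmetic: the asset fall that wipes out equity is 1 / leverage.
def wipeout_loss(leverage: float) -> float:
    """Fractional fall in asset value that exhausts equity (assets / equity)."""
    return 1.0 / leverage

for lev in (10, 25, 50):  # illustrative leverage ratios (assets / equity)
    print(f"leverage {lev:>2}x: a {wipeout_loss(lev):.0%} asset fall erases equity")
```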

All these suggestions are being fiercely resisted by the banking community. Chanting mantras about “invisible hands” and “perfect markets”, the bailed-out banks want to get back on their roller-coaster and ride it. We do well to remember Stiglitz’s maxim: “the reason the invisible hand is invisible is that it is not there”.

Essentially all the above activity focuses on systemic risk. Undoubtedly important though this is, my view is that equal attention should be given to the ingeniously complex financial instruments – Credit Default Swaps (CDS) and their kin – which precipitated the initial crisis (Haldane & May, 2011). In retrospect, it is clear that the theory which provided the basis for pricing these instruments was grossly flawed. And the Credit Rating Agencies were naïve, not to say extraordinarily incompetent, in not recognising this. Personally, I would like simply to forbid trading in instruments so complex as to defy intuitive understanding. On the other hand, I do recognise the difficulties here: who decides what is too complicated? Going one step further back to fundamentals, a very thoughtful essay by Benjamin Friedman, Distinguished Professor of Political Economy at Harvard University (Friedman & Solow, 2011), poses the question: what is the basic role of financial markets in a free-enterprise economy? Friedman sees the task “to be one of allocating the economy’s scarce investment capital”. He notes that “the financial system also provides other services that are valuable. But I highlight the allocation of the economy’s capital because for all of the financial system’s other functions [here he gives examples] we have well-established alternative models”.

Having thus defined the task, he goes on to ask – in the light of recent events – whether the economy is indeed being well served by the financial system. His answer is a decisive “no”. First, with detailed examples, he notes that “assets were mispriced and resources, therefore, were badly allocated”. Second, he asks “how much it is costing us to operate this financial system that allocates our capital”. The facts are that thirty years ago, the cost of running the financial system “was 10% of all the profits earned in America”. Fifteen years ago, this had risen to somewhere between 20% and 25% of all such profits. And in the mid-noughties, before the crisis hit, “running the financial system took one-third of all profits earned on investment capital”. And this figure is an underestimate, because it excludes such items as the costs of property (essentially always on prime sites), not to mention lobbying.

In summary, Friedman goes deeper than addressing systemic risk, deeper than asking about the dodgy financial instruments that initiated systemic failure, to ask the truly fundamental question of how cost-effective the present system is at allocating capital.

Science Advice and Policy Making
In the foregoing, I have deliberately chosen three different areas, all controversial, where scientific issues intersect with policy choices. For climate change, we have unambiguous scientific understanding, which calls out for both mitigation and adaptation. This, however, conflicts both with some business interests and with some intransigent (often politically right-wing) opinions based on beliefs rather than facts. For tomorrow’s food and GM crops we again have conflict between established science and firmly-held opinion, but here the opposition comes mainly from voluntary bodies such as NGOs, whose motives are generally well-intended (and often left-wing). For financial services, the science (itself largely social science, but increasingly merging with recent physical-science-led advances in “complex systems”) is by no means fully understood. But the clear, if less than definitive, changes being considered by regulators are being strongly resisted by the financial community, who are extremely well paid when things go well, while the costs of the bad times fall not on them but on the taxpayer.

In these three varied examples, along with very many others, we face the question of how to give science advice for policy making. Most importantly, recognising that we will never have unanimity even when the scientific understanding is very secure, how do we handle such questions in ways that will generate public confidence in the outcome?

My answer here is an outline (and in parts a plagiarism) of more detailed suggestions given in my 2002 and 2005 Anniversary Addresses to the Royal Society (May, 2002, 2006), themselves based on earlier experience as Chief Scientific Adviser to the UK Government, 1995-2000. In principle, the answer is simple. Bring together the appropriate experts, consulting widely and deliberately seeking and considering dissenting opinions. Identify conflicts of interest, but do not necessarily use them as grounds to exclude individuals. And, above all, do all this openly. In many practical circumstances it is most important, yet most difficult, to separate the scientific facts and uncertainties – which must serve as a constraining background – from policy choices. In addition, one should aim to assess the magnitude of risks whenever possible, and to manage them proportionately. When real or perceived uncertainties remain, give people choices wherever possible (e.g., label GM food).

This relatively simple list of precepts was set out in the Protocols for Science Advice in Policy Making issued by John Major’s Government in 1996. They have subsequently been reviewed and reaffirmed by Tony Blair’s Government in 1997 and 2000, and by Gordon Brown’s Government in 2006 (at each iteration they have grown bulkier, but their essentials remain unchanged). Independent support for such rules has also been provided (with acknowledgement of the originating 1996 document) by the Phillips Inquiry into BSE in 2000 and by the House of Lords Science and Technology Select Committee in 2000 (the Jenkin Report).

Enunciating an ideal process is one thing. Embedding it as standard operating procedure is another. Indeed, given the apparent need to reincarnate such protocols for science advice in policy making at regular intervals, I suspect similar rules were enunciated prior to 1996 and that I am guilty of being unaware of them!

Quite apart from this “embedding” issue, there are other major problems in implementing such guidelines for good practice.

For one thing, as noted in all three of the varied “case studies” above, the scientific facts are simply not relevant for many individuals and groups whose views are determined by fixed, unquestioning ideologies (religious beliefs, political doctrines, and so on). For such individuals, no observed fact or experimental result can ever prevail over the apodictic “truth” of a fixed belief or canonical revelation. Rather than engage with the scientific facts and uncertainties, such ideologues and extremists will pick and choose among them – or deliberately misrepresent them – in support of immutable beliefs. It is a category error to call such people “sceptics”.

In the tumult of voices that can arise in such disputes, the media – print, radio, TV – are often unhelpful, for two reasons. First, their primary aim, which is not at all unreasonable, is to get your attention – to be read, listened to, watched. Only secondarily do they aim to inform; indeed, they cannot hope to inform if they do not attract your attention. Second, the media’s praiseworthy desire for “balance” in reporting too often leads to presenting “two sides” as if reporting a soccer match. But this can, and often does, seriously misrepresent the state of the scientific evidence, where “one team” is the consensus view of the science community and the “other team” is a tiny minority. For example, consider the debate about whether HIV causes AIDS. A research community of the order of 100,000 people has by now established this as a fact. But a small travelling roadshow, including one Nobel Laureate, can still be assembled to deny it. And there are many other examples: from MMR vaccination in the UK to the essential reality of anthropogenic climate change (albeit with remaining uncertainties about the timescales and magnitudes of some nonlinear processes).

Furthermore, it is often difficult to make an accurate assessment of risks, and even when an accurate assessment can be made, many people’s subjective assessment is very different from the objective facts. Such subjective attitudes can create their own reality and impede effective policy actions. Even a policy of “label and let the consumer choose” has its problems. For one thing, there is a question of individual risk versus collective risk (e.g., an individual may choose the health risk of smoking, but there remain associated risks of “passive smoking” for family or in public places; other examples abound, particularly in relation to vaccination policies and herd immunity). For another, there is the question of individuals making bad choices for dependent people, such as young children.
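
The herd-immunity example rests on a standard piece of epidemiological arithmetic: for a disease with basic reproduction number R0, transmission is choked off once a fraction p_c = 1 - 1/R0 of the population is immune, so each individual’s vaccination choice alters everyone else’s risk. A minimal sketch, using illustrative round values of R0 rather than figures from this article:

```python
# Herd-immunity thresholds p_c = 1 - 1/R0 (R0 values are illustrative).
for disease, R0 in [("influenza-like", 2), ("polio-like", 6), ("measles-like", 15)]:
    p_c = 1 - 1 / R0
    print(f"{disease:>14} (R0 = {R0:>2}): ~{p_c:.0%} of the population must be immune")
```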

In all this, the job of science and scientists is to frame the debate clearly, making plain the possible benefits and costs – and the concomitant uncertainty – and making clear that cloud-cuckoo-land is not a feasible choice. But when it comes to acting out the democratic drama of choice on the constraining stage thus set, science has no special voice. Scientists are just citizens on this stage along with others. The drama of choice is about values and beliefs, ultimately about what kind of world we want.

Such democratic choices, against a background framed by scientific facts and uncertainties, are hard enough. As emphasised above, it is more difficult still when fundamentalist or other belief systems – or other motives more generally – seek to blur the distinction between constraining facts and democratic decisions. We should always keep in mind the cautionary tale of the state of Indiana, whose Lower House voted in 1897 to define the transcendental number π (the ratio of a circle’s circumference to its diameter) to be exactly 3.2, to make things easier for the construction industry; its Upper House saved embarrassment by shelving the bill.

References
Oreskes, N. & Conway, E.M. (2010) Merchants of Doubt; Bloomsbury Press, London, UK.

Conway, G. (1997) The Doubly Green Revolution; Penguin Books, London, UK.

Haldane, A.G. (2009a) Rethinking the financial network. http://www.bankofengland.co.uk/publications/speeches/2009/speech389.pdf.

Haldane, A.G. (2009b) Banking on the state. http://www.bankofengland.co.uk/publications/speeches/2009/speech409.pdf.

Haldane, A.G. & May, R.M. (2011) Systemic risk in banking ecosystems. Nature, 469, 351-355.

Friedman, B.M. & Solow, R.M. (2011) The financial crisis and economic policy. Bulletin of the American Academy, Spring 2011, 36-44.

May, R.M. (2002) How to choose tomorrow, rather than just letting it happen (Royal Society Anniversary Address). Notes and Records of the Royal Society, 57, 117-132.

May, R.M. (2006) Threats to tomorrow’s world (2005 Royal Society Anniversary Address). Notes and Records of the Royal Society, 60, 109-130.

Lord May of Oxford is one of Australia’s most distinguished mathematicians. He holds a Professorship at Oxford University and is a Fellow of Merton College, Oxford. He was President of The Royal Society (2000-2005) and before that was Chief Scientific Adviser to the UK Government and Head of the UK Office of Science and Technology (1995-2000). He had a key role in the application of chaos theory to theoretical ecology through the 1970s and 1980s. His many honours include the Royal Swedish Academy’s Crafoord Prize, the Swiss-Italian Balzan Prize, the Japanese Blue Planet Prize and the Royal Society’s Copley Medal. He is a Foreign Member of the US National Academy of Sciences and an Overseas Fellow of the Australian Academy of Science.

Lord May was presented with his Fellowship of the Royal Society of New South Wales at Government House on 29 April 2011 by the Society’s patron, the Governor of New South Wales, Her Excellency, Professor Marie Bashir.

Source: Journal and Proceedings of the Royal Society of New South Wales, vol. 144, nos. 3&4, pp. 50-57. Reproduced with permission.