
One Eye on the Future

Credit: mitarart/Adobe


By John L. Bradshaw

The newly appreciated relevance of pupillary studies conducted in the 1960s provides a cautionary tale about the modern metrics used to evaluate which research projects should gain funding.

Recently Australia’s research scientists were asked to prepare an updated report on their productivity over the previous year. Increasingly, scientists, by profession generators of new knowledge, are being squeezed into an alien, business-speak model of operation. It is now less a matter of scientific discovery for its own sake – providing us with further insights into the underlying nature of physical, biological, medical, chemical and mathematical matters – and more an attempt at a quantitative assessment of performance output in a world that is increasingly competitive for research funding and career advancement. Quality, significance and other aspects of academic excellence now play second fiddle to crude numerical measures and models.

Not so very long ago the rot set in when an academic’s number of published papers provided the basic metric. As a result, to gain kudos for our institution, we were even encouraged to aim in our writing for the “smallest publishable unit” by salami-slicing our limited findings into as many publishable papers as possible.

Consequently, ever more journals sprouted, mushroom-like, in the fields of academia to accommodate all the new papers appearing, generating boom times for commercial publishing houses as scientists struggled to force out yet another paper. Then, of course, the tide turned and we were instead encouraged to aim as high as possible, submitting to the world’s best journals, which usually only accepted papers encompassing a substantial body of interconnected studies.

In a belated attempt to address quality in coherent accounts of a body of research, bean-counting attention then switched to encompass the relative reputations of the target journals, with august publications such as Nature and Science at the top of the pecking order. Referees, approached to provide confidential assessments of the relevance, quality, thoroughness and importance of the burgeoning submissions, were swamped by the deluge. Anomalies arose with, for example, “Letters to the Editor” in Nature: typically shorter accounts, with more extended findings cast instead as lengthier Articles. Such “Letters” were generally discarded by the bean-counters as if they were merely “Correspondence”, as found in other, less prestigious journals, even though Letters had long existed in Nature alongside Articles and Correspondence. Indeed, many professional scientists would give their eyeteeth for a Letter in Nature.

Partly to address such anomalies, our bean-counting accountants then introduced a new factor into the industrial assessment equation: the number of citations attracted by each publication (book, book chapter, journal article). From this was derived the h-index, a metric combining an author’s productivity and citation impact across their publishing history: an author has an h-index of h if h of their papers have each been cited at least h times.
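As a concrete aside, the arithmetic behind the h-index is simple enough to sketch in a few lines of Python; the function name and the citation figures below are purely illustrative.

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have each been cited at least h times."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:  # this paper still supports an h of `rank`
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times yield an h-index of 4:
# four papers have at least 4 citations each, but only three have 5 or more.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```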

A problem with such attempts at quantifying the qualitative arises when authors regularly, exhaustively and often irrelevantly cite themselves to inflate their scores; or, even more darkly, when papers are repeatedly cited by other authors not as examples of virtue but of bad design, false premises, error, or generally of what not to do.

I recently looked up my own Google h-index, and while I confess not to have been particularly disappointed in this competitive world, entries under the associated Google Scholar Citations listings revealed some disturbing anomalies. There were occasional instances of intrusions into my listings by someone else with exactly my publishing name; someone who, maybe aptly enough, was engaged in the otherwise highly respectable discipline of research into wastewater, drainage and sewage disposal. How many of my own papers have similarly, and mistakenly, ended up in my competitor’s wastewater listings?

All this is just another instance of the unfortunate efforts, by our governmental lords and masters, to direct funding into channels they believe better serve the public good, and where there may seem to be an immediate application to perceived societal needs. Such a philosophy effectively views only applied science as socially useful, ignoring the fact that most great innovations, insights and discoveries spring from initially pure research. Even where pure research is given lip service, funding all too often depends upon routine testing of pre-existing hypotheses, rather than “what if?” discoveries per se. The latter are all too often denigrated as “fishing expeditions” or “stamp collecting”.

This state of affairs, and the view within universities nowadays that grant-winning is itself an output measure, an end in itself rather than a means to the end of making new findings, tempts applicants to “play it safe” and apply for funding for work already safely completed but not yet reported, where they already know the outcomes and can be assured of “success”. If the grant is secured, the researchers may then feel free to “divert” the funds to what they had really wanted to do all along.

I did, however, make an interesting observation when monitoring the twice-weekly Google Scholar notifications of my own recently cited papers. Suddenly there appeared a burst of recent citations of papers I had actually published half a century or more ago. These were on my doctoral studies involving pupillometry – the measurement of pupil size and reactivity. Indeed, that field was practically virgin in those halcyon days, the only real precedent being Archimedes of Syracuse, a Leonardo-like polymath, engineer, mathematician and natural scientist born c. 287 BC, who developed a very elegant geometric way of determining pupillary diameter. At the time this had no obvious practical application.

I was a product of the British approach to “supervising” graduate studies: the candidates were expected to come up with their own ideas, build their own apparatus, run the experiments, and analyse and interpret the findings largely on their own. My “supervisor”, an amiable and vague English gentleman, was surprised and pleased when I handed him an appropriately bound thesis. “How very interesting; what is it all about?” was his first comment, followed by: “I suppose you’ll be wanting examiners”. I did, and felt that this perhaps was where he might make a useful contribution. He did, as he “knew someone who owed him a favour”. The oral part of the process duly took place one afternoon in the local pub, where my role was basically to buy the drinks and discourse upon why I had settled upon this line of research, and what I had done and found.

I glossed over six months of agony trying to think up something original, having to find equipment from various army and war-surplus stores and from my boyhood Meccano set, and the fact that an amorous encounter with a female cousin during a distant Christmas celebration had led me to observe that the pupils of her eyes had expanded and contracted in sync with the prevailing emotional atmosphere. I was not then aware that 18th century ladies were known to instil certain herbal preparations into their eyes to provoke pupillary dilation, implying a surging romantic interest in their male acquaintances.

My supervisor, on his fifth or sixth beer, offered perhaps the single most useful suggestion he could ever have made to me when I bewailed the fact that I had been pipped at the publishing post, by a month, by an American pair, Kahneman and Beatty, with a very similar pioneering study to mine in Science. “Oh, that’s all right”, he said. “Maybe send a note to Nature. They are sure to find it interesting.” They did, and that became my very first publication in 1967 – in the world’s most prestigious journal. Decades later, Daniel Kahneman jointly won a Nobel Prize for his work on how we make judgements, whether by gut instinct or by deliberation.

But why were my pupillary procedures and findings so significant then, rapidly replicated and extended by the many other researchers who jumped on the bandwagon, only to regain research currency, relevance and interest in the last year or so after a decades-long pause and slumber?

In the 1970s, forensic lie-detector measures of autonomic reactivity (changes in blood pressure, heart rate and sweating responses) were employed as useful proxies for guilt, stress, cognitive load, interest and so on. But they were slow, gross, and required specialist preparation, application and interpretation. The eyes, on the other hand, are truly a window into the soul, offering an instantaneous and accessible reflection of momentary changes in processing load. And while the pupillary technology of the day was no less daunting than the contemporary “lie-detecting” polygraphs, the procedure has nowadays been miniaturised to the point that it can be mounted on a pair of spectacles, picking up not just moment-to-moment pupillary changes but also blink rate (another useful measure) and fixational eye movements that reflect the current direction of attention.

Early pupillary studies by myself and other groups peaked in the early 1970s. Thereafter, with “proof-of-concept” largely established, they soon tapered away to near zero by about 1980.

Then, last year, I noted through my Google Scholar citations a sudden resurgence of interest. It was gratifying that my early studies, reported 50 or so years ago, were still relevant and worth citing, but why now? Is science also subject to the changing and often recycling whims of fashion, tidal trends and cycles? Or is this an earthly analogue of the Eastern concept of changing cycles of rebirth?

Newly miniaturised and non-invasive technology has certainly made matters easier and more attractive. More importantly, analytic measures such as baselines, peaks, and times to and from peak values, borrowed from recent upper-limb studies of biomechanics and kinematics, may suddenly have revived interest in such a sensitive and rapidly responsive measure – one compatible with, and complementary to, electrophysiological recording and (to a lesser extent) functional neuroimaging such as fMRI.
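As a rough illustration of the kinds of measures just described, here is a minimal Python sketch extracting baseline, peak and peak-timing values from a pupil-diameter trace; the function name, parameters and the 5% recovery criterion are all illustrative assumptions rather than any standard analysis pipeline.

```python
import numpy as np

def pupil_metrics(trace, sample_rate_hz, stim_onset_s, baseline_s=1.0):
    """Illustrative extraction of baseline, peak dilation, time to peak
    and time back towards baseline from a pupil-diameter trace (mm).
    Names and the 5% recovery criterion are assumptions, not a standard."""
    trace = np.asarray(trace, dtype=float)
    onset = int(stim_onset_s * sample_rate_hz)

    # Baseline: mean diameter over the window preceding stimulus onset.
    base_start = max(0, onset - int(baseline_s * sample_rate_hz))
    baseline = trace[base_start:onset].mean()

    # Peak dilation and its latency after the stimulus.
    post = trace[onset:]
    peak_idx = int(np.argmax(post))
    time_to_peak_s = peak_idx / sample_rate_hz

    # Time from the peak back to within 5% of baseline, if it recovers.
    recovered = np.where(post[peak_idx:] <= 1.05 * baseline)[0]
    time_from_peak_s = recovered[0] / sample_rate_hz if recovered.size else None

    return {"baseline_mm": baseline, "peak_mm": float(post[peak_idx]),
            "time_to_peak_s": time_to_peak_s,
            "time_from_peak_s": time_from_peak_s}
```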

Moreover, new applications are being discovered, such as indexing the effects of fatigue, intoxication, medication, distraction and inattention on the safety of driving or operating sensitive or hazardous machinery. Pupillary procedures are also exquisitely sensitive in the study of the possible underpinnings or consequences of background light intensity, dark adaptation, retinal rivalry, classical conditioning and even hypnotic suggestion.

Might I ever have thought in 1967 that when I would be nearly 80 years old my first pupillary paper and its associated studies would suddenly revive from their long comatose hibernation into a renewed public awareness – and also attract the attentions of the new breed of bean counters? Moreover, had the bean counters themselves existed back then, would they have forecast that these pupillometric studies might decades later become a hot topic despite (or maybe even because of) the development and deployment of so many then-unimaginable technological capabilities?

My pupillometric studies have thus become a metaphor for the blinkering of research by short-sighted analytics that can’t foresee the focus of future scientific pursuits. Science is too unpredictable to have its vision narrowed by business-minded measurements of productivity.


John Bradshaw is Emeritus Professor of Neuropsychology at Monash University.