Australasian Science: Australia's authority on science since 1938

Publish, Patent, Be Social or Perish

By Guy Nolch

A researcher’s impact extends beyond measures of publications and citations to patents, peer review and social media influence.

In 1665 the Royal Society published the first journal in the world exclusively devoted to science. As scientific endeavours expanded, it became necessary in 1887 to split Philosophical Transactions of the Royal Society into two publications serving the physical sciences (A) and the life sciences (B).

In 2015 the International Association of Scientific, Technical and Medical Publishers (STM) estimated that the number of active scientific journals had grown to 28,100. To this can be added the rise of “predatory” journals, with Jeffrey Beall of the University of Colorado cataloguing more than 1000 publishers producing “vanity” journals that charge a fee to authors whose work has not been accepted in peer-reviewed journals.

According to Nature there is “a doubling of global scientific output roughly every 9 years”. Why is so much science being published?

“Publish or perish” has been a central part of career advancement in science, with scientific output measured by research publications and citations of papers published in (preferably) “high impact” journals. STM estimates that the number of scientists publishing their work is increasing by as much as 5% each year, and their research impact (measured through these basic metrics) has played a large role in their career progression and ability to attract research funding.

But how relevant are these measures in a digital age characterised by “fake news” in the mainstream media and the infiltration of predatory journals and conferences into academia? Should bibliometric measures also include other forms of influence, such as social media and patents? And how can early career scientists be compared with older researchers who have the advantage of more publications and citations to their names?

In this issue of Australasian Science, A/Prof Paul McCarthy of UNSW (p.37) outlines the rise of alternative measures of research impact, and their growing use in the identification of young research talent. An example he cites is the appointment of Andre Geim to his first full professorship by The University of Manchester in 2001, despite his having fewer than 1000 citations to his name. In 2010 Geim was a joint recipient of the Nobel Prize in Physics for graphene research he had published in 2004. McCarthy describes Geim’s appointment as “one of the most strategic hires in the past 20 years... In June 2016 the League of Scholars ranked the University first in the world with 46 Top 500-ranked graphene scholars”.

Alternative measures of research impact can also include social media influence, patents and peer review. For instance, Impactstory “provides one central place for authors to collect and display social media mentions of their work across Twitter, Wikipedia, Facebook and news articles,” while Publons “creates a unit of currency for the previously unrewarded job of reading and critiquing the work of other academics”.

The downside of all these metrics will be the additional hoops that researchers will need to jump through. Funding applications already consume a significant amount of a researcher’s time, and collating additional metrics will only add to the burden.

Guy Nolch is the Editor and Publisher of Australasian Science.