by Glen Wright
The K-Index compares the number of followers an academic has on Twitter with the number of citations to their peer-reviewed work. Those with a high ratio of followers to citations (a K-index > 5) are labelled ‘Kardashians’. A high K-index is, Hall says, a warning to the academic community that a researcher may have ‘built their public profile on shaky foundations’, while a low K-index suggests that a scientist is being undervalued.
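For the quantitatively curious, Hall derives an ‘expected’ follower count from a power-law fit to his sample, roughly F = 43.3 × C^0.32, and the K-index is actual followers divided by that expectation. A minimal sketch in Python, with invented follower and citation counts:

```python
def expected_followers(citations: float) -> float:
    # Hall's power-law fit of Twitter followers to citations:
    # F = 43.3 * C^0.32 (constants as reported in the paper).
    return 43.3 * citations ** 0.32

def k_index(followers: int, citations: int) -> float:
    # Kardashian index: actual followers divided by the number
    # 'expected' given the citation record; K > 5 earns the label.
    return followers / expected_followers(citations)

# Invented numbers: 5,000 followers but only 300 citations.
print(round(k_index(5_000, 300), 1))    # ~18.6: certified Kardashian
# 1,000 followers against 2,000 citations stays well under 5.
print(round(k_index(1_000, 2_000), 1))  # ~2.0
```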
Hall’s paper is funny and worth a read. However, as a big believer in the value of social media, especially for early career researchers, I can’t help but feel that Hall might be ‘punching down’ at those of us with less established careers than his. Either that, or Hall simply shares a misapprehension of social media common among established scholars.*
Neuroscientist Micah Allen writes:14
We (the Kardashians) are democratizing science. We are filtering the literally unending deluge of papers to try and find the most outrageous, the most interesting, and the most forgotten, so that they can see the light of day beyond wherever they were published and forgotten . . . Wear your Kardashian index with pride . . .
This is far from the only use for social media, but as someone who spends an inordinate amount of time seeking out outrageous, interesting and forgotten papers, I strongly sympathise with this sentiment.
The last word comes from another Hall, Nathan Hall of McGill University and of Shit Academics Say fame (see page 162). He neatly sums up the tension between social-media-savvy scholars and the old guard:
Perhaps the most interesting thing about academics and social media is that the most traditionally influential feel above it, leaving almost completely unattended a massive lane of influence for those not asleep at the wheel.
ALTERNATE SCIENCE METRICS
Mere hours after Hall’s paper on the K-Index was published, a hashtag was born to parody it.15 Under the banner of #AlternateScienceMetrics, the academic Twittersphere created hundreds of joke impact measures that saw a range of fictional characters, books, and films turned into elaborate metaphors for academic publishing.
The Kanye Index =
# self-citations ÷ total citations16
Just as Kanye thinks he’s the greatest rock star alive,* plenty of academics seem to love themselves a touch too much. The Kanye Index measures the level of self-citation in an author’s work (a toy calculation follows at the end of this list).
The Priorities Index =
# dead house plants (HP) ÷ (total HP + total publications)*17
Academics are often working so hard that they neglect everything else, from house plants to relationships. Calculating your Priorities Index might just help you get some perspective.
The Minion Index =
# papers you do all the work for, but end up as nth author (where n is > 1)18
The Minion Index will likely appeal to PhD students and postdocs, who are frequently required to slog away on papers only to take 2nd or 3rd (or 9th) place on the author list.
The Bechdel Index =
# papers with >2 female co-authors19
The Bechdel Test was originally proposed, albeit as a bit of sarcasm in a cartoon strip, to highlight the lack of films that feature women as people.† The test could feasibly be used to highlight academia’s yawning gender gap.
The Adam Sandler Index =
# identical papers published with different titles20
Another classic technique in academia: repackaging something you already published as something all new and shiny for submission to another journal (much like the unending stream of tediously unfunny Adam Sandler films).
The Dawkins Index =
# times quoted in internet arguments ÷ total publications21
The Dawkins Index identifies those whose quotes and witticisms have begun to overshadow their original academic work.*
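Most of these metrics reduce to one-line arithmetic. Here, as promised, is a toy calculation of the Kanye Index, with invented numbers (identifying self-citations in real bibliometric data is considerably messier):

```python
def kanye_index(self_citations: int, total_citations: int) -> float:
    # Fraction of an author's citations that are self-citations.
    return self_citations / total_citations if total_citations else 0.0

# Invented numbers: 120 of 400 citations are self-citations.
print(kanye_index(120, 400))  # 0.3
```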
SELF-CITATION
If impact is like sex, then self-citation is . . . an inevitable and healthy part of academic writing, in moderation. But excessive self-citation, while unlikely to cause blindness, can make you look crass and unprofessional.
Cyril Labbé, identified earlier as the cataloguer of published SCIgen papers, has also shown how easy it is to artificially inflate your academic ego using the internet. He invented an academic persona ‘Ike Antkare’ and generated a hundred papers, all citing each other. In this way, Antkare managed to garner a highly impressive h-index of 94 (lower than Freud, but higher than Einstein).22
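For anyone unfamiliar with the h-index: a researcher scores h if h of their papers have at least h citations each. A minimal sketch shows why mutual citation works so well; the corpus below is an idealisation, not Labbé’s actual data:

```python
def h_index(citations_per_paper: list[int]) -> int:
    # Largest h such that h papers have at least h citations each.
    counts = sorted(citations_per_paper, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# 100 papers that all cite each other: each paper receives 99
# citations, so the idealised h-index is 99. (Antkare's actual
# score of 94 presumably reflects how Google Scholar indexed
# the corpus, not the arithmetic.)
print(h_index([99] * 100))  # 99
```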
A small number of academics, for whom collecting citations and massaging their ego via impact has become something of an obsession, have been using similar techniques to ensure that their numbers are ever-increasing.
I found the Google Scholar page of one young and celebrated professor bursting with 6,000 citations, almost all of them self-citations. The most incredible examples are the contributions of the professor’s team to conferences. In one year alone the research group published six papers at a single conference, with the number of self-citations in each ranging from 25 to 40, totalling at least 150 citations from a single conference. Not bad for a day’s work. In one of these papers the authors self-cite over 20 papers in the first footnote.
Another example of impact inflation was brought to my attention by Jason McDermott (the awesome artist behind the cartoons in this book). He was searching gene names in a database and started to notice a pattern: a string of publications characterising different genes looked suspiciously similar. Their titles were essentially the same, just substituting the relevant gene name each time, all had at least two core authors, and most were published in a handful of journals with relatively low impact factors. Many of the papers were rehashed digests of information obtained from existing databases, combined with some basic information about potential applications in cancer or biomedicine. The main author of these papers has published 99 papers in the International Journal of Oncology, with the self-citations generating an h-index of 48. There are also 99 papers in the International Journal of Molecular Medicine, with an only slightly less impressive h-index of 37. A combined search for the three core authors retrieved 216 publications with a combined h-index of 56, a number that would make any academic proud.
While excessive self-citation is routinely denounced, female academics may be failing to win chairs because they do not cite themselves enough.* Barbara Walter, of the University of California, San Diego, argues that female scholars do not cite their own previous work as much as male colleagues. This diminishes their perceived importance and prejudices them when it comes to decisions on top-level positions. To test her hypothesis, Walter and her team reviewed around 3,000 articles in the top 12 peer-reviewed political science journals. While any given publication was cited an average of 25 times, those with an all-male author list garnered an average of five more citations than those with an all-female list.♀ Walter has not yet figured out why this is, though anecdotal evidence suggests that female academics tend to look unfavourably on self-promotion (and studies regarding self-promotion more generally seem to support this).*
IN A JIF
‘Like nuclear energy, the impact factor is a mixed blessing.’
Eugene Garfield
Journals like to show they have an impact too, and for this we have the Journal Impact Factor (JIF), which measures the average number of citations received by papers published in a given journal.23 Eugene Garfield, who is regarded as the father of bibliometrics, first mentioned the idea of a JIF in Science in 1955, and originally calculated impact factors manually by noting all citations made that year in a (presumably huge) notebook.†
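For the record, the standard two-year JIF for year Y is the number of citations received in Y by items the journal published in the two preceding years, divided by the number of citable items it published in those years. A minimal sketch, with invented numbers:

```python
def impact_factor(citations_received: int, citable_items: int) -> float:
    # Two-year JIF for year Y: citations received in Y to items
    # published in Y-1 and Y-2, divided by the number of citable
    # items published in Y-1 and Y-2.
    return citations_received / citable_items

# Invented numbers: 450 citations in 2017 to the 200 + 180 papers
# a journal published in 2015 and 2016.
print(round(impact_factor(450, 200 + 180), 3))  # 1.184
```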
Thomson Reuters subsequently managed to get a monopoly on JIFs, and once a year the world of academic publishing waits with bated breath to see who’s who. The rest of us look on and try to pretend that we don’t care‡ and that impact factors don’t mean anything anyway.§
On calculating the impact factor for a given journal, C&EN Onion jokes that:24
The current standard impact factor model used by scientists relies on the International Impact Factor Prototype (IIFP), a physical copy of the latest issue of the New England Journal of Medicine, stored in a climate-controlled vault under armed guard – defined as precisely 55.87(3) IF.
Just as authors are occasionally overzealous in citing their own work, some journals have engaged in masturbatory self-referencing to bulk up their numbers. In his 1999 essay ‘Scientific Communication – A Vanity Fair’, Georg Franck warned that obsessive citation-counting could result in editors inflating their journals’ numbers by requiring citations to the journal as a prerequisite for publication. Years later this fear is becoming a reality, at least in certain corners of academic publishing.
In one survey of almost 7,000 researchers, one in five said that editors had asked them to increase citations to their journal, without pointing to any specific or relevant papers, or suggesting that the manuscript was lacking.25
This is bad form. I’ve even seen an ‘instructions for authors’ page that told authors to cite articles from the journal, subscribe to it, and encourage their colleagues and institutions to do the same. Another journal published an annual review article citing every single paper published in the preceding 12 months, thus ensuring that each paper had at least one additional citation for that year.
While shifty strategies may work for a while, Thomson Reuters de-lists journals with unhealthy self-citation rates. For example, the World Journal of Gastroenterology received its first impact factor in 2000, pegged at a modest 0.993. A year later it was up to 1.445 and by 2003 it was at 3.318. The journal’s success was being fuelled by self-citations, which accounted for over 90% of its total citations, and it was subsequently de-listed. It was re-listed in 2008, this time with a more muted impact factor of 2.081 (comprising just 8% self-citations).26
Over 50 journals were removed from the list in 2011 for extreme self-citation, including Cereal Research Communications, which had a 96% self-citation rate. It’s enough to make you choke on your Cheerios.
Notes
For the love of trees, I have opted to keep this bibliography (relatively) short. For more details, please go to AcademiaObscura.com/buffalo, where I plan to concoct a multimedia extravaganza containing links, photos, and videos. If I get distracted and don’t get around to doing this (highly likely), I will at the very least provide full references and PDFs (where I can do so legally).
* Might have overstretched the metaphor there.
† In fact, this is a common misquote of a passage from W. Edwards Deming’s 1993 book The New Economics. What Deming actually said is: ‘It is wrong to suppose that if you can’t measure it, you can’t manage it – a costly myth’.
‡ But because citation analysis is complex and because any statistical analysis always depends to some extent on how you cut the data, we don’t really know the exact figures.
* Hirsch suggests that in physics an h-index of around 12 may be typical for getting tenure as an associate professor at a major research university.
† Despite frequent reproduction, it appears that Einstein never actually said this. The phrase instead appears to come from William Bruce Cameron’s 1963 book Informal Sociology: A Casual Introduction to Sociological Thinking, wherein he states: ‘It would be nice if all of the data which sociologists require could be enumerated because then we could run them through IBM machines and draw charts as the economists do. However, not everything that can be counted counts, and not everything that counts can be counted.’
* Despite including some elements of societal impact and outreach, the REF remains a heavily citation-focused process. My good friend Dr David Hayes described it to me as follows: ‘It’s a rather large-scale quality-measuring exercise for the research outputs of British academics (so as you can imagine most everyone hates it because you can’t measure quality, etc. etc. etc.) which, in practical terms, dictates things like promotions, availability of academic jobs, and the amount of money universities have to throw around. Under the last REF in 2014, Universities had to nominate a selection of research staff who would each submit four pieces of research (but that was used by many institutions to cherry-pick its best and brightest and thereby massage the figures). The REF sets out criteria for grading the papers: 4* = “world-leading”; 3* = “internationally excellent”; 2* = “nationally excellent” and 1* = I forget the euphemism, but shit. Accepted wisdom is that the best pieces for REF submission will have to fall into the 3*–4* range to be competitive. And then we get into league tables and all that poisonous bollocks.’
* Join the club.
* Aaron was the baseball player who broke Babe Ruth’s home run record.
* I wouldn’t have written this book if Twitter wasn’t great for fooling around and procrastinating. But I’ve also used it to build a network of academics in my field, get access to paywalled papers, seek support and mentorship, find co-authors, and get feedback on my work.
* He’s not.
* I particularly like this one as I have a terrible record with houseplants. I was once gifted a houseplant called ‘Thrives on Neglect’, which I neglected to death in a few short weeks.
† Alison Bechdel’s comic strip Dykes to Watch Out For (1985). The Bechdel Test as originally conceived simply requires that a work of fiction feature at least two women who talk to each other about something other than a man. Incredibly, only about half of all films pass the test.
* Though these days it is Richard Dawkins’s own social media missteps that have begun to overshadow his original work, and the (in)famous evolutionary biologist has experienced something of a fall from grace due to his propensity to send cringeworthy tweets to his 2 million followers. Dawkins has unhelpfully weighed in on the controversy surrounding Ahmed Mohamed (the young Muslim student whose home-made clock was mistaken for a bomb), suggested that some rapes are not as bad as others, and accidentally (ironically? surreptitiously?) posted a QR code with a link to a racist website in it.
* Chairs in this context refers to the highly sought after academic position – there is no academic contest to win physical chairs (yet).
* It is always possible to find exceptions that more or less prove the rule. One high-profile case of a female scientist firmly shuns the trend: inflated stats were the shaky foundation for her career, which crumbled when she later committed scientific misconduct and embezzlement. Over half of her 4,000-plus citations were self-citations.
† In a similar fashion, early bibliometric scholar Derek de Solla Price manually noted all the citations from the Philosophical Transactions of the Royal Society to track the exponential growth in scientific publishing. He published the seminal book Little Science, Big Science (1963) based on this work. As with many landmark works, this came about by accident – when he arrived in Singapore to do a postdoc, the library was not yet functional and a full set of Transactions was one of the few complete resources available.
‡ Even though we do a little bit.
§ Even though they do a little bit.
1 For a useful overview, see: Remler, ‘Are 90% of Academic Papers Really Never Cited? Reviewing the Literature on Academic Citations’ (2014) LSE Impact of Social Sciences Blog; Barnes, ‘Why Humanities Citation Statistics Are like Eskimo Words for Snow’ (2016).
2 Remler, ‘How Few Papers Ever Get Cited? It’s Bad, But Not THAT Bad’ (2014) Social Science Space.
3 ‘1612 Highly Cited Researchers according to Their Google Scholar Citations Public Profiles’ (2017) Ranking Web of Universities.
4 ‘Changing Perceptions of Diabetes through Stand-Up Comedy’ (2015) REF2014 Impact Case Studies.
5 ‘The Management and Governance of Land to Enhance African Livelihoods’ (2015) REF2014 Impact Case Studies.
6 Lemonick, ‘Paul Erdős: The Oddball’s Oddball’ (1999) Time.
7 Goffman, ‘And What Is Your Erdős Number?’ (1969) American Mathematical Monthly.
8 Grossman, ‘Facts about Erdős Numbers and the Collaboration Graph’, The Erdős Number Project.
9 Cohen, ‘Yet More Chutzpah in Assigning Erdős Numbers’, The Erdős Number Project Extended.
10 Grossman, ‘Items of Interest Related to Erdős Numbers’ (2014) The Erdős Number Project.
11 ‘TIL That When Physics Professor Jack H. Hetherington Learned He Couldn’t Be the Sole Author on a Paper. (Because He Used Words Like “we” & “our”) Rather than Rewriting the Paper He Added His Cat as an Author’ (2014) Reddit.
12 Cohen, ‘Yet More Chutzpah in Assigning Erdős Numbers’, The Erdős Number Project Extended.
13 Hall, ‘The Kardashian Index: A Measure of Discrepant Social Media Profile for Scientists’ (2014) Genome Biology.
14 Allen, ‘We the Kardashians Are Democratizing Science’ (2014) Neuroconscience.
15 I believe this was started by Alex Wild (@Myrmecos).
16 Jason McDermott (@BioDataGanache).
17 Nick Wan (@nickwan).
18 Jon Tennant (@Protohedgehog).
19 Blake Stacey (@blakestacey).
20 Jon Tennant (@Protohedgehog).
21 Diana Crow (@CatalyticRxn).
22 Labbé, ‘Ike Antkare One of the Great Stars in the Scientific Firmament’ (2010) International Society for Scientometrics and Informetrics Newsletter.
23 Garfield, ‘The Agony and the Ecstasy: The History and Meaning of the Journal Impact Factor’ (2005) International Congress on Peer Review and Biomedical Publication.
24 ‘Scientific Community Debates Standard Definition of “Impact Factor”’ (2016) C&EN Onion.
25 Wilhite and Fong, ‘Coercive Citation in Academic Publishing’ (2012) Science.
26 Davis, ‘Gaming the Impact Factor Puts Journal in Time-Out’ (2011) Scholarly Kitchen.