We teach brilliance bias to children from an early age. A recent US study found that when girls start primary school at the age of five, they are as likely as five-year-old boys to think women could be ‘really, really smart’.43 But by the time they turn six, something changes. They start doubting their gender. So much so, in fact, that they start limiting themselves: if a game is presented to them as intended for ‘children who are really, really smart’, five-year-old girls are as likely to want to play it as boys – but six-year-old girls are suddenly uninterested. Schools are teaching little girls that brilliance doesn’t belong to them. No wonder that by the time they’re filling out university evaluation forms, students are primed to see their female teachers as less qualified.
Schools are also teaching brilliance bias to boys. As we saw in the introduction, following decades of ‘draw a scientist’ studies where children overwhelmingly drew men, a recent ‘draw a scientist’ meta-analysis was celebrated across the media as showing that finally we were becoming less sexist.44 Where in the 1960s only 1% of children drew female scientists, 28% do now. This is of course an improvement, but it is still far off reality. In the UK, women actually outnumber men in a huge range of science degrees: 86% of those studying polymers, 57% of those studying genetics, and 56% of those studying microbiology are female.45
And in any case, the results are actually more complicated than the headlines suggest and still provide damning evidence that data gaps in school curriculums are teaching children biases. When children start school they draw roughly equal percentages of male and female scientists, averaged out across boys and girls. By the time children are seven or eight, male scientists significantly outnumber female scientists. By the age of fourteen, children are drawing four times as many male scientists as female scientists. So although more female scientists are being drawn, much of the increase has been in younger children before the education system teaches them data-gap-informed gender biases.
There was also a significant gender difference in the change. Between 1985 and 2016, the average percentage of female scientists drawn by girls rose from 33% to 58%. The respective figures for boys were 2.4% and 13%. This discrepancy may shed some light on a 2016 study which found that while female students ranked their peers according to actual ability, male biology students consistently ranked their fellow male students as more intelligent than better-performing female students.46 Brilliance bias is one hell of a drug. And it doesn’t only lead to students mis-evaluating their teachers or each other: there is also evidence that teachers are mis-evaluating their students.
Several studies conducted over the past decade or so show that letters of recommendation are another seemingly gender-neutral part of a hiring process that is in fact anything but.47 One US study found that female candidates are described with more communal (warm; kind; nurturing) and less active (ambitious; self-confident) language than men. And having communal characteristics included in your letter of recommendation makes it less likely that you will get the job,48 particularly if you’re a woman: while ‘team-player’ is taken as a leadership quality in men, the same term ‘can make a woman seem like a follower’.49 Letters of recommendation for women have also been found to emphasise teaching (lower status) over research (higher status);50 to include more terms that raise doubt (hedges; faint praise);51 and to be less likely to include standout adjectives like ‘remarkable’ and ‘outstanding’, with women more often described in ‘grindstone’ terms like ‘hard-working’.
There is a data gap at the heart of universities using teaching evaluations and letters of recommendation as if they are gender neutral in effect as well as in application, although, like the meritocracy data gap more broadly, it is not a gap that arises from a lack of data so much as a refusal to engage with it. Despite all the evidence, letters of recommendation and teaching evaluations continue to be heavily weighted and used widely in hiring, promoting and firing, as if they are objective tests of worth.52 In the UK, student evaluations are set to become even more important when the Teaching Excellence Framework (TEF) is introduced in 2020. The TEF will be used to determine how much funding a university can receive, and the National Student Survey will be considered ‘a key metric of teaching success’. Women can expect to be penalised heavily in this new world of Teaching Excellence.
The lack of meritocracy in academia is a problem that should concern all of us if we care about the quality of the research that comes out of the academy, because studies show that female academics are more likely than men to challenge male-default analysis in their work.53 This means that the more women who are publishing, the faster the gender data gap in research will close. And we should care about the quality of academic research. This is not an esoteric question, relevant only to those who inhabit the ivory towers. The research produced by the academy has a significant impact on government policy, on medical practice, on occupational health legislation. It has, in short, a direct impact on all of our lives. It matters that women are not forgotten here.
Given the evidence that children learn brilliance bias at school, it should be fairly easy to stop teaching them this. And in fact a recent study found that female students perform better in science when the images in their textbooks include female scientists.54 So to stop teaching girls that brilliance doesn’t belong to them, we just need to stop misrepresenting women. Easy.
It’s much harder to correct for brilliance bias once it’s already been learnt, however, and once children who’ve been taught it grow up and enter the world of work, they often start perpetuating it themselves. This is bad enough when it comes to human-on-human recruitment, but with the rise of algorithm-driven recruiting the problem is set to get worse, because there is every reason to suspect that this bias is being unwittingly hardwired into the very code to which we’re outsourcing our decision-making.
In 1984 American tech journalist Steven Levy published his bestselling book Hackers: Heroes of the Computer Revolution. Levy’s heroes were all brilliant. They were all single-minded. They were all men. They also didn’t get laid much. ‘You would hack, and you would live by the Hacker Ethic, and you knew that horribly inefficient and wasteful things like women burned too many cycles, occupied too much memory space,’ Levy explained. ‘Women, even today, are considered grossly unpredictable,’ one of his heroes told him. ‘How can a [default male] hacker tolerate such an imperfect being?’
Two paragraphs after having reported such blatant misogyny, Levy nevertheless found himself at a loss to explain why this culture was more or less ‘exclusively male’. ‘The sad fact was that there never was a star-quality female hacker’, he wrote. ‘No one knows why.’ I don’t know, Steve, we can probably take a wild guess.
By failing to make the obvious connection between an openly misogynistic culture and the mysterious lack of women, Levy contributed to the myth of innately talented hackers being implicitly male. And, today, it’s hard to think of a profession more in thrall to brilliance bias than computer science. ‘Where are the girls that love to program?’ asked a high-school teacher who took part in a summer programme for advanced-placement computer-science teachers at Carnegie Mellon. ‘I have any number of boys who really, really love computers,’ he mused.55 ‘Several parents have told me their sons would be on the computer programming all night if they could. I have yet to run into a girl like that.’
This may be true, but as one of his fellow teachers pointed out, failing to exhibit this behaviour doesn’t mean that his female students don’t love computer science. Recalling her own student experience, she explained how she ‘fell in love’ with programming when she took her first course in college. But she didn’t stay up all night, or even spend a majority of her time programming. ‘Staying up all night doing something is a sign of single-mindedness and possibly immaturity as well as love for the subject. The girls may show their love for computers and computer science very differently. If you are looking for this type of obsessive behavior, then you are looking for a typically young, male behavior. While some girls will exhibit it, most won’t.’
Beyond its failure to account for female socialisation (girls are penalised for being antisocial in a way boys aren’t), the odd thing about framing an aptitude for computer science around typically male behaviour is that coding was originally seen as a woman’s game. In fact, women were the original ‘computers’, doing complex maths problems by hand for the military before the machine that took their name replaced them.56
Even after they were replaced by a machine, it took years before they were replaced by men. ENIAC, the world’s first fully functional digital computer, was unveiled in 1946, having been programmed by six women.57 During the 1940s and 50s, women remained the dominant sex in programming,58 and in 1967 Cosmopolitan magazine published ‘The Computer Girls’, an article encouraging women into programming.59 ‘It’s just like planning a dinner,’ explained computing pioneer Grace Hopper. ‘You have to plan ahead and schedule everything so that it’s ready when you need it. Programming requires patience and the ability to handle detail. Women are “naturals” at computer programming.’
But it was in fact around this time that employers were starting to realise that programming was not the low-skilled clerical job they had once thought. It wasn’t like typing or filing. It required advanced problem-solving skills. And, brilliance bias being more powerful than objective reality (given that women were already doing the programming, they clearly had these skills), industry leaders started training men. And then they developed hiring tools that seemed objective, but were actually covertly biased against women. Rather like the teaching evaluations in use in universities today, these tests have been criticised as telling employers ‘less about an applicant’s suitability for the job than his or her possession of frequently stereotyped characteristics’.60 It’s hard to know whether these hiring tools were developed as a result of a gender data gap (not realising that the characteristics they were looking for were male-biased) or a result of direct discrimination, but what is undeniable is that they were biased towards men.
Multiple-choice aptitude tests which required ‘little nuance or context-specific problem solving’ focused instead on the kind of mathematical trivia that even then industry leaders were seeing as increasingly irrelevant to programming. What they were mainly good at testing was the type of maths skills men were, at the time, more likely to have studied at school. They were also quite good at testing how well networked an applicant was: the answers were frequently available through all-male networks like college fraternities and Elks lodges (a US-based fraternal order).61
Personality profiles formalised the programmer stereotype nodded to by the computer-science teacher at the Carnegie Mellon programme: the geeky loner with poor social and hygiene skills. A widely quoted 1967 psychological paper had identified a ‘disinterest in people’ and a dislike of ‘activities involving close personal interaction’ as a ‘striking characteristic of programmers’.62 As a result, companies sought these people out, they became the top programmers of their generation, and the psychological profile became a self-fulfilling prophecy.
This being the case, it should not surprise us to find this kind of hidden bias enjoying a resurgence today courtesy of the secretive algorithms that have become increasingly involved in the hiring process. Writing for the Guardian, Cathy O’Neil, the American data scientist and author of Weapons of Math Destruction, explains how online tech-hiring platform Gild (which has now been bought and brought in-house by investment firm Citadel63) enables employers to go well beyond a job applicant’s CV, by combing through their ‘social data’.64 That is, the trace they leave behind them online. This data is used to rank candidates by ‘social capital’, which basically refers to how integral a programmer is to the digital community. This can be measured through how much time they spend sharing and developing code on development platforms like GitHub or Stack Overflow. But the mountains of data Gild sifts through also reveal other patterns.
For example, according to Gild’s data, frequenting a particular Japanese manga site is a ‘solid predictor of strong coding’.65 Programmers who visit this site therefore receive higher scores. Which all sounds very exciting, but as O’Neil points out, awarding marks for this rings immediate alarm bells for anyone who cares about diversity. Women, who as we have seen do 75% of the world’s unpaid care work, may not have the spare leisure time to spend hours chatting about manga online. O’Neil also points out that ‘if, like most of techdom, that manga site is dominated by males and has a sexist tone, a good number of the women in the industry will probably avoid it’. In short, Gild seems to be something like the algorithmic form of the male computer-science teacher from the Carnegie programme.
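Gild’s actual model is proprietary and its internals have never been published, but a minimal sketch can show how this kind of proxy bias works in any feature-weighted scorer. Everything in the example below – the candidates, the features, the weights – is hypothetical:

```python
# A minimal, hypothetical sketch of proxy bias in a feature-weighted
# candidate scorer. Gild's real model is proprietary: every candidate,
# feature name and weight below is invented for illustration.

CANDIDATES = {
    "alice": {"github_commits": 320, "stackoverflow_answers": 45, "manga_site_visits": 0},
    "bob":   {"github_commits": 310, "stackoverflow_answers": 40, "manga_site_visits": 60},
}

# Suppose the weights were fitted to historical data about who turned
# out to be a 'strong coder'. If past strong coders were mostly men,
# behaviours more common among men -- such as frequenting a
# male-dominated manga site -- acquire positive weight and quietly
# become proxies for gender.
WEIGHTS = {
    "github_commits": 0.5,
    "stackoverflow_answers": 1.0,
    "manga_site_visits": 2.0,  # the proxy feature doing the damage
}

def social_capital_score(features: dict) -> float:
    """Linear score: a higher score means the platform ranks you higher."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

# Rank candidates from highest to lowest score.
for name, features in sorted(CANDIDATES.items(),
                             key=lambda item: social_capital_score(item[1]),
                             reverse=True):
    print(f"{name}: {social_capital_score(features):.1f}")
# bob: 315.0
# alice: 205.0
# Two near-identical technical records, but the leisure-time proxy
# pushes bob far ahead of alice in the ranking.
```

Nothing in this sketch ever mentions gender; the bias arrives entirely through a feature that happens to correlate with it.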
Gild undoubtedly did not intend to create an algorithm that discriminated against women. They were intending to remove human biases. But if you aren’t aware of how those biases operate, if you aren’t collecting data and taking a little time to produce evidence-based processes, you will continue to blindly perpetuate old injustices. And so by not considering the ways in which women’s lives differ from men’s, both on and offline, Gild’s coders inadvertently created an algorithm with a hidden bias against women.
But that’s not even the most troubling bit. The most troubling bit is that we have no idea how bad the problem actually is. Most algorithms of this kind are kept secret and protected as proprietary code. This means that we don’t know how these decisions are being made and what biases they are hiding. The only reason we know about this potential bias in Gild’s algorithm is because one of its creators happened to tell us. This, therefore, is a double gender data gap: first in the knowledge of the coders designing the algorithm, and second, in the knowledge of society at large, about just how discriminatory these AIs are.
Employment procedures that are unwittingly biased towards men are an issue in promotion as well as hiring. A classic example comes from Google, where women weren’t nominating themselves for promotion at the same rate as men. This is unsurprising: women are conditioned to be modest, and are penalised when they step outside of this prescribed gender norm.66 But Google was surprised. And, to do them credit, they set about trying to fix it. Unfortunately the way they went about fixing it was quintessential male-default thinking.
It’s not clear whether Google didn’t have or didn’t care about the data on the cultural expectations that are imposed on women, but either way, their solution was not to fix the male-biased system: it was to fix the women. Senior women at Google started hosting workshops ‘to encourage women to nominate themselves’, Laszlo Bock, head of people operations, told the New York Times in 2012.67 In other words, they held workshops to encourage women to be more like men. But why should we accept that the way men do things, the way men see themselves, is the correct way? Research has recently emerged showing that while women tend to assess their intelligence accurately, men of average intelligence think they are more intelligent than two-thirds of people.68 This being the case, perhaps it wasn’t that women’s rates of putting themselves up for promotion were too low. Perhaps it was that men’s were too high.
Bock hailed Google’s workshops as a success (he told the New York Times that women are now promoted proportionally to men), but if that is the case, why the reluctance to provide the data to prove it? When the US Department of Labor conducted an analysis of Google’s pay practices in 2017 it found ‘systemic compensation disparities against women pretty much across the entire workforce’, with ‘six to seven standard deviations between pay for men and women in nearly every job category’.69 Google has since repeatedly refused to hand over fuller pay data to the Labor Department, fighting the demand in court for months while insisting that there is no pay imbalance.
For a company built almost entirely on data, Google’s reluctance to engage here may seem surprising. It shouldn’t be. Software engineer Tracy Chou has been investigating the number of female engineers in the US tech industry since 2013, and has found that ‘[e]very company has some way of hiding or muddling the data’.70 Companies also don’t seem interested in measuring whether or not their ‘initiatives to make the work environment more female-friendly, or to encourage more women to go into or stay in computing’ are actually successful. There’s ‘no way of judging whether they’re successful or worth mimicking, because there are no success metrics attached to any of them’, explains Chou. And the result is that ‘nobody is having honest conversations about the issue’.
It’s not entirely clear why the tech industry is so afraid of sex-disaggregated employment data, but its love affair with the myth of meritocracy might have something to do with it: if all you need to get the ‘best people’ is to believe in meritocracy, what use is data to you? The irony is, if these so-called meritocratic institutions actually valued science over religion, they could make use of the evidence-based solutions that do already exist. For example, quotas, which, contrary to popular misconception, were recently found by a London School of Economics study to ‘weed out incompetent men’ rather than promote unqualified women.71
They could also collect and analyse data on their hiring procedures to see whether these are as gender neutral as they think. MIT did this, and its analysis of over thirty years of data found that women were disadvantaged by ‘usual departmental hiring processes’, and that ‘exceptional women candidates might very well not be found by conventional departmental search committee methods’.72 Unless search committees specifically asked department heads for names of outstanding female candidates, the heads might not put any women forward. And many of the women who were eventually hired when special efforts were made to find female candidates would not have applied for the job without encouragement. In line with the LSE findings, the paper also found that standards were not lowered during periods when special effort was made to hire women: if anything, the women who were hired ‘are somewhat more successful than their male peers’.