SHIFT+CTRL: The Implicit Prejudice of Algorithms and How to Debug Them

By Sarah Haque


We are increasingly relying on algorithms to make complex societal decisions in lieu of humans. But, just like the humans who built them, algorithms are inherently biased. Now what?


The hand is dark-skinned, its palm and nailbeds a muted pink. It swipes the air beneath an automated soap dispenser, waiting. Nothing. 

The technology is simple: an invisible infrared beam is emitted, and the dispenser activates when enough of it bounces back off a nearby hand. There’s a minor omission in the design: darker colours absorb light rather than reflect it. They forgot to account for black people.
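To see the failure in code: a rough sketch of that trigger logic, with invented sensor values and threshold, might look something like this.

```python
# Hypothetical sketch of an IR proximity trigger (values are invented).
# The dispenser fires only when enough reflected infrared comes back.

REFLECTION_THRESHOLD = 0.4  # calibrated, presumably, on lighter-skinned hands

def dispense_soap(reflected_ir: float) -> bool:
    """Return True if the sensor decides a hand is present."""
    return reflected_ir >= REFLECTION_THRESHOLD

# Darker skin absorbs more infrared, so less light bounces back.
print(dispense_soap(0.7))  # lighter hand: True, soap dispensed
print(dispense_soap(0.3))  # darker hand: False, nothing happens
```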

It’s forgivable enough, laughable even. Maybe it racks up thousands of retweets and goes viral. But it doesn’t end with segregating soap dispensers.

Algorithms are at large. They curate our social media, dictate our dating lives, and record unimaginable amounts of personal data. Now, they press profoundly against our healthcare and judicial systems. Recent studies – and a steady stream of tech scandals – have exposed a rather weighty truth: machine learning algorithms can, and will, discriminate based on classes such as race and gender.

Tech giants have had to issue frequent navel-gazing, feet-shuffling apologies for some of these algorithm-induced blunders. In 2015, Google Photos’ automated labelling service misidentified two black friends as “gorillas”. A year later, Microsoft’s artificial intelligence (AI) chatbot ‘Tay’, programmed to learn from its conversations with other Twitter users, was shut down within twenty-four hours after its tweets morphed from ‘Humans are super cool!’ to ‘Hitler was right, I hate the Jews.’ In 2018, Amazon scrapped an experimental machine learning recruitment tool which favoured men over women for developer jobs. As recently as November 2019, Apple was vilified when its highly anticipated credit card reportedly gave men credit limits up to twenty times higher than those offered to women with equal or better credit scores. The list goes on.

So are algorithms prejudiced? The short answer is: yes. The long answer is: yes, but there are identifiable causes, and therefore very real possibilities for change.

‘502 Bad Gateway’ – Biased Algorithms

The bulk of the scientific literature on algorithmic bias is frightful, hopeful and astute. It is also overwhelmingly in agreement that the dangerous biases of machine learning algorithms not only exist but will continue to grow without proper acknowledgement.

‘Algorithms simplify and generalise,’ says Dr Robert Elliott Smith, computer scientist and author of Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All. ‘That’s what they do. There’s no such thing as an unbiased algorithm,’ he adds.

Dr Smith was working on AI back when it still sounded like Klingon. As an expert in evolutionary algorithms with thirty years of experience, he has conducted research projects for organisations including NASA, the European Union, the US Army’s Strategic Defense Command, and British Aerospace. I caught Dr Smith for a rare sliver of time between his lectures at UCL and his obligations as CTO of BOXARR Ltd. – a company which creates tools to understand complex computational systems for the Ministry of Defence, the Australian Government, and BAE Systems.

Our webchat begins with me apologising for the blank screen on my end. My laptop’s webcam has a sticker over it. He laughs as I blame Mr. Robot. The lilt of his Southern accent – hailing from Birmingham, Alabama – and use of queer colloquialisms I have to google shortly after – ‘hay while the sun shines’ – instil an organic sense of friendliness in every drop of ‘t’ or curl of ‘r’.

Dr Smith speaks remarkably plainly about how algorithms work. Algorithms require data sets. That’s how AI, using machine learning systems, ‘learns’. When you build an algorithm you’re inducing a bias onto that data set. Dr Smith cites a study from his Twitter feed to elaborate: ‘If you look at a space of data, and you’re trying to divide that data to simplify it in some way, there are an infinite number of ways to do that. So, you have to make a choice. And you make a choice usually based on, kind of, the geography of the data space. You basically say, “the points nearer together are points that are similar, and I’ll treat them the same.” And when you do that, you effectively induce a bias in the space of all possible ways of interpreting data – you are interpreting it on some measure of nearness. And that’s a bias.’
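A toy sketch of that point, with made-up data, shows how the choice of ‘nearness’ is itself a bias: the same query point gets a different ‘most similar’ neighbour depending on which distance measure the programmer happens to pick.

```python
# Toy illustration of Dr Smith's point: grouping "near" points is itself a choice,
# and the distance measure is an assumption baked in by the programmer.
import numpy as np

points = np.array([[0.0, 0.0], [1.0, 8.0], [9.0, 0.5], [10.0, 9.0]])

def euclidean(a, b):
    return np.linalg.norm(a - b)

def horizontal_only(a, b):
    # An equally valid (and equally biased) choice: ignore the second feature.
    return abs(a[0] - b[0])

def nearest_neighbour(query, metric):
    """Return the index of the point the metric deems 'most similar'."""
    return min(range(len(points)), key=lambda i: metric(query, points[i]))

query = np.array([1.0, 1.0])
print(nearest_neighbour(query, euclidean))        # 0 -- [0, 0] is 'nearest'
print(nearest_neighbour(query, horizontal_only))  # 1 -- now [1, 8] is 'nearest'
```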

But the reality is that the data is potentially biased before you even touch it. The way it has been gathered – through surveys, tests, or online interactions – inevitably shapes that data. If certain groups are left out of the data set, the AI simply won’t register their characteristics.

‘And another thing,’ Dr Smith says wryly, and I think he can hear me shaking my head. ‘Effectively, these algorithms are ‘black boxes’, meaning any biases in them that exist, either intentional or emergent, are intractable and unknown.’

Think of it like a simple input-output process. A biased input is fed in. A biased output is spat out. In between exists a black box wherein it’s not clear how the decisions have been made. This is happening at lightning speed, on a global scale. 
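Here is a minimal sketch of that process, assuming nothing more than invented hiring data and a standard off-the-shelf classifier: the model is never told to discriminate, yet it faithfully reproduces the skew baked into its training labels.

```python
# Hedged sketch: a classifier trained on biased historical decisions
# (invented data) simply learns to reproduce the historical pattern.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)
n = 1000
qualification = rng.normal(size=n)          # feature: a skill score
group = rng.integers(0, 2, size=n)          # feature: a protected attribute (0 or 1)
X = np.column_stack([qualification, group])

# Historical labels: past decision-makers penalised group 1 regardless of skill.
y = (qualification - 1.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Two equally qualified candidates who differ only in the protected attribute.
candidate_a = [[1.0, 0]]
candidate_b = [[1.0, 1]]
print(model.predict_proba(candidate_a)[0][1])  # high probability of a 'yes'
print(model.predict_proba(candidate_b)[0][1])  # noticeably lower: bias in, bias out
```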

‘Those biases,’ he tells me, ‘will inevitably reflect factors that are embedded in our society. For instance, our society, and this is global society, is sexist. So, it is unsurprising that current data – say, generations of credit reports – will reflect sexism.’

A Microsoft research study from 2016 agrees with him, reporting sexist word embeddings in language-based algorithms, which projected occupations such as ‘philosopher’, ‘captain’, and ‘protégé’ onto men; and ‘homemaker’, ‘nurse’, and ‘receptionist’ onto women.
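The measurement behind that finding can be sketched in a few lines. The vectors below are invented placeholders; in the actual research they came from embeddings trained on large volumes of news text, and the test is how much closer an occupation sits to ‘he’ than to ‘she’.

```python
# Hedged sketch of measuring gender bias in word embeddings.
# Real studies use pretrained vectors; these tiny vectors are invented.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder three-dimensional "embeddings", purely for illustration.
vectors = {
    "he":        np.array([0.9, 0.1, 0.0]),
    "she":       np.array([0.1, 0.9, 0.0]),
    "captain":   np.array([0.8, 0.2, 0.1]),
    "homemaker": np.array([0.2, 0.8, 0.1]),
}

def gender_lean(word):
    """Positive means closer to 'he'; negative means closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

for word in ("captain", "homemaker"):
    print(word, round(gender_lean(word), 3))
# With real embeddings trained on human-written text, the same lopsided
# pattern shows up across dozens of occupations.
```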

MIT’s ‘Gender Shades’ study, spearheaded by computer scientist Joy Buolamwini, found that leading facial recognition programs ‘performed better for lighter individuals and males overall’. These types of software ‘saw’ black women the least – if at all. Buolamwini calls this algorithmic bias ‘The Coded Gaze’. She talks of the day when bemusing mis-tags of friends in Facebook photos become a serious misidentification of a suspected criminal.

That day, it seems, may not be too far off. It was recently revealed that owners of the King’s Cross estate were using facial recognition technology, including images supplied to their database by London’s Metropolitan Police, to scan the faces of the public without their knowledge or approval. According to a report from Georgetown Law, ‘over 117 million American adults’ already have their faces in facial recognition networks. These databases are unregulated and use algorithms that have not been audited for accuracy.

Another study in the US found that biased algorithms are being used for perhaps the most Orwellian concept since mass surveillance: predictive policing. Scores – known as risk assessments – are churned out by software that determines how likely someone is to reoffend. The researchers obtained risk scores assigned to more than 7,000 people arrested in Florida over 2013 and 2014, and cross-checked those predictive scores against how many of them were actually charged with new crimes over the next two years. Their conclusion: the algorithm is biased against black defendants. This software, used across the US, is likely ‘to falsely flag black defendants as future criminals, wrongly labelling them this way at almost twice the rate as white defendants.’ That disparity is not explained by prior crimes or by the type of crimes they were charged with. It is inexplicably racial.
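The arithmetic behind ‘almost twice the rate’ is straightforward to state, if not to fix: for each group, count how often people who did not reoffend were nonetheless labelled high risk. A hedged sketch, with a handful of invented records standing in for the real data:

```python
# Hedged sketch of the false-positive-rate comparison behind the finding.
# The records below are invented; the real analysis used thousands of cases.
from collections import defaultdict

# (group, labelled_high_risk, reoffended_within_two_years)
records = [
    ("black", True,  False), ("black", True,  False), ("black", False, False),
    ("black", False, False), ("black", True,  True),
    ("white", True,  False), ("white", False, False), ("white", False, False),
    ("white", False, False), ("white", True,  True),
]

flagged = defaultdict(int)   # non-reoffenders wrongly labelled high risk
innocent = defaultdict(int)  # non-reoffenders in total

for group, high_risk, reoffended in records:
    if not reoffended:
        innocent[group] += 1
        if high_risk:
            flagged[group] += 1

for group in innocent:
    print(f"{group}: false positive rate {flagged[group] / innocent[group]:.0%}")
```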

‘People at work, they often tell me to watch Black Mirror,’ says Brittani Smalls, Director of Operations of the non-profit organisation Women Who Code. ‘I tell them these things are happening already.’ 

My chat with her comes at a crucial time. I’m waning, burdened by the dystopian reality of our current algorithmic infrastructure. Ms Smalls is a welcome relief. She lives in a region of the US where race is palpable, and slavery is not such ancient history as people would like to believe. Snapchat filters don’t work on her dark-skinned brother or her nephew because they’re just too black to be seen.

‘Being a woman of colour is being invisible and hyper-visible, all at once,’ she tells me. Ms Smalls migrated from the male-dominated world of finance to the male-dominated world of tech three years ago. ‘I mean, it’s the intricacies of life. I think, if not me, then who?’

We talk about a study in Science which found that an algorithm widely used in US healthcare has a large racial bias, effectively ensuring unequal care for black patients even when they are considerably sicker. Ms Smalls is unsurprised: ‘You know, the opioid epidemic in the US, right now… Black people, we were saved from that. A lot of doctors think black people have higher pain threshold – because of slavery – so white people get prescribed more opioids. We got lucky there,’ she says.

Ms Smalls helped create the app Paratransit Pal, which helps people with disabilities use public transport more easily. ‘You have a responsibility, especially when you’re coding,’ she says.

Dr Smith is a firm believer in boundaries: ‘When you’re making critical decisions about human beings’ lives you have to bring the human element to bear. The reality is that AI, as it exists now, is nowhere near advanced enough to do that.’ 

He interrupts himself to clarify, ‘Now, that’s not to say all algorithms are bad. Or that they’re all bigoted. I’m really not saying that. What I am saying is that when you treat the complex systems of humanity and human society in quantitative ways then you have to exercise caution about the outcomes. Because, invariably, those quantitative ways have biases and flaws.’ 

‘406 Not Acceptable’ – Debugging Society

Algorithms are not malignant. In fact, they are what led me to Dr Smith and Ms Smalls. However, a popular misconception is that algorithms are objective fact. Algorithms, like the humans who build them, are inherently flawed. Cathy O’Neil, data scientist and author of Weapons of Math Destruction, puts it plainly: ‘Algorithms are opinions, rooted in maths.’

The narrative of powerful, uncontrollable AI is trite and inaccurate. It is also a very common pitfall in the media’s approach to tech. For Dr Smith, this is a very real problem: ‘Reporting AI as smarter than it really is undermines human intelligence, undermines belief in ourselves.’

There are three key approaches to tackling the flaws in our current algorithmic infrastructure: transparency, regulations, and diversity.

A fundamental issue with algorithms is that much of how they work, and the data they use, eludes us. There are crucial questions we need to ask: what data is used? How was it obtained? And how does the algorithm use it to make decisions? To achieve ‘fairer’ algorithms, we urgently need to pursue transparency and Explainable AI (XAI), which would allow us to understand exactly why automated systems behave the way they do.
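In its simplest form, ‘explainable’ means being able to point at which inputs drove a decision. A minimal sketch, using an interpretable linear model and invented feature names rather than any particular XAI toolkit:

```python
# Minimal sketch of one form of explainability: inspecting which features
# drive a linear model's decisions. Feature names and data are invented.
from sklearn.linear_model import LogisticRegression
import numpy as np

feature_names = ["income", "years_at_address", "postcode_risk_band"]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.0, 0.3, -1.2]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The learned weights are one (limited) answer to "why did it decide that?"
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
# If 'postcode_risk_band' dominates, the model may be proxying for race or class:
# exactly the kind of question transparency and XAI are meant to let us ask.
```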

The law is falling miserably behind innovation. Algorithms in the technology and social media spheres are, as it stands, largely unregulated. Guidelines need to be enforced to ensure data rights, to monitor and investigate discriminatory automated decisions, and to hold new, powerful technologies accountable. ‘What needs to happen is governments need to go in and basically say, “the way that your algorithms work in feeding people content has to ensure a degree of coverage,” ’ says Dr Smith. Governments desperately need to catch up. Leaps in innovation are hurtling us towards a cliff’s edge, beyond which we sink, headfirst, into an unfettered world where data is liquid and we’re all left clutching at thin air.

Diversity of expression, of mind, and of coders is a big issue. The fact – one that is often spoken about but to little avail – is that there is just not enough diversity in tech. As of 2016, 63% of computer science graduates were white. ‘Only 15% of engineering graduates in the UK are women, and that’s terrible. There’s no good reason for that whatsoever. We’ve got a lot of work to do,’ says Dr Smith. Women Who Code do just that work. The organisation offers scholarships, provides a community of support, and hosts events and talks across the world for women in STEM. Ms Smalls believes they could be a catalyst for change: ‘It’s not just diversity, it’s inclusion. Women Who Code could be the organisation that says, “hey, this is not okay.” ’

For Dr Smith, diversity preservation is a technological value. It is a concept rooted in as much scientific history as ‘survival of the fittest’, yet it receives comparatively little credit. ‘Diversity is a fundamental part of an evolving system that allows it to cope with the unforeseen,’ he tells me. ‘The inevitable unforeseen. We need to realise that as a value.’ For diversity to work on this scale, it has to be ingrained into the very zeitgeist of our society.

He’s a self-diagnosed optimist, however: ‘I think we’ll get to a better future; I think people will fight for it and make it happen. I do. I think mechanical intelligence and human intelligence will work together better once we realise they are separate from one another.’

Dr Smith speaks how he thinks: quickly and in nonlinear, broken fragments with moments of clarity and great depth. Some of his haphazard philosophical musings I jot down and mull over days later. This is one of them, buried in the transcript of our hour-long talk: ‘When we talk about artificial intelligence, we act as if intelligence is some abstract quality that can be pulled out of an individual and be described separately. The reality is, I don’t think that can be done; it’s an integrated quality. That’s the grand reality of the century – that quality actually exists. Quality exists as a separate thing from quantity.’ 

Ultimately, there is no need to succumb to the blue screen of death. There’s still time to fix this. Awareness, regulation, transparency, and the preservation of diversity are just a handful of ways to debug algorithmic bias. Algorithms will change, inevitably, when computing becomes less binary.
