Experts warn Artificial Intelligence threatens to ‘ethnically cleanse’ visions of the future

AI is currently far too white for human diversity to flourish

Mark Cantrell
Oct 21, 2020

Experts at Cambridge University have warned that by failing to reflect the true diversity of our species, AI technology risks denying people of colour a presence in our imagined world of tomorrow. By Mark Cantrell

Stock image courtesy of Pixabay

ARTIFICIAL intelligence (AI) may not pose a mortal threat to humanity, as is so often depicted in science fiction, but it does apparently threaten an ‘ethnically cleansed’ vision of the future.

That’s putting it a little harshly, perhaps, but what else can you call it when entire groups of people are effectively airbrushed from the scene? Yet it’s not really the technology per se that’s responsible; it’s more the biases at play in the minds of its human creators.

The way we see AI — good and bad — is filtered through perceptions instilled by our experiences of wider society, with all its everyday norms and expectations. All too often we’re unaware of how these have been shaped by a mix of conscious and unconscious bias rooted in a variety of social, cultural and economic inequalities.

Often, it’s the arena of everyday culture where these blinkered expectations are forged, or at least implanted and reinforced. The way AI is depicted, for instance — whether in stock photographic images, cinematic robots, or the dialects used for virtual assistants — tends to emphasise the ‘Caucasian’ as a singular frame of reference.

As a result, say experts from Cambridge University’s Leverhulme Centre for the Future of Intelligence (CFI), our vision of AI is overwhelmingly white. The consequence, however unintended, removes people of colour from any vision of humanity’s high tech future. They are, effectively, erased from the scene.

“Stock imagery for AI distils the visualisations of intelligent machines in Western popular culture as it has developed over decades,” said Dr Stephen Cave, executive director of the CFI. “From Terminator to Blade Runner, Metropolis to Ex Machina, all are played by white actors or are visibly white on screen.”

Furthermore, he adds: “Androids of metal or plastic are given white features, such as in I, Robot. Even disembodied AI — from HAL-9000 to Samantha in Her — have white voices. Only very recently have a few TV shows, such as Westworld, used AI characters with a mix of skin tones.”

The issue is far from abstract; the way we imagine the world of tomorrow in the here and now helps shape whatever future we do manage to pull out of the raw possibilities. You might say we make the future in our own image. And that’s a scary prospect for those who don’t fit the ‘norm’, if humanity’s sense of itself fails to reflect diversity to the full.

“White culture can’t imagine being taken over by superior beings resembling races it has historically framed as inferior.” Dr Kanta Dihal

We might be tempted to write this all off as an abstract issue, of course, but it already has real-world implications that ought to give pause for thought. As the CFI researchers argue, such a narrow portrayal of AI risks creating a “racially homogeneous” workforce of aspiring technologists, building machines with “bias baked into their algorithms”.

“Given that society has, for centuries, promoted the association of intelligence with white Europeans, it is to be expected that when this culture is asked to imagine an intelligent machine it imagines a white machine,” said Dr Kanta Dihal, who leads CFI’s ‘Decolonising AI’ initiative.

“People trust AI to make decisions. Cultural depictions foster the idea that AI is less fallible than humans. In cases where these systems are racialised as white that could have dangerous consequences for humans that are not.”

AI, of course — or rather the limited machine-learning systems currently touted as such — is proving anything but infallible. Worryingly, such “baked-in bias” has already cropped up in the wild, as it were. Even where no discrimination is intended, the assumptions encoded in an algorithm, and in the way its data is harvested and collated, can serve these systems up as a perfect agent of plausible deniability for old-fashioned prejudice.
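To make that concrete, here is a minimal, purely illustrative sketch (not drawn from the CFI paper, and built on entirely made-up data) of how a system can discriminate without ever being told anyone’s race: a proxy feature such as a postcode carries the historical prejudice into the model, even though the code itself looks perfectly neutral.

```python
# A minimal illustrative sketch, not from the article: a model that never sees
# a "race" field can still reproduce historical prejudice through a proxy
# feature, giving the appearance of neutrality. All data here is synthetic.
from collections import defaultdict

# Synthetic "historical" hiring records: (postcode, hired).
# Postcode "B2" stands in for an area whose residents were rarely hired.
history = ([("A1", 1)] * 80 + [("A1", 0)] * 20 +
           [("B2", 1)] * 10 + [("B2", 0)] * 90)

# "Training": estimate the historical hire rate per postcode.
counts = defaultdict(lambda: [0, 0])  # postcode -> [hires, total]
for postcode, hired in history:
    counts[postcode][0] += hired
    counts[postcode][1] += 1

def recommend(postcode: str) -> bool:
    """Recommend a candidate if their postcode's historical hire rate exceeds 50%."""
    hires, total = counts[postcode]
    return hires / total > 0.5

print(recommend("A1"))  # True: the majority area sails through
print(recommend("B2"))  # False: the bias in the data becomes the rule
```

The veneer of mechanical neutrality is precisely what makes such outcomes easy to defend, which is why the researchers’ warning about who builds these systems, and whose face they wear, matters beyond the screen.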

None of this need be the case, according to the CFI researchers; our machines don’t need to be the codified embodiment of our bias. Unlike them, we’re not bound by programming; we can make a choice. With a little more self-awareness, and a willingness to look beyond our blinkered thinking, we can challenge such narrow perceptions and effectively “decolonise” artificial intelligence.

The racial thinking discussed here doesn’t just apply to our whitewashed rendition of AI, the CFI team suggests. As the researchers argue, there is a long tradition of “crude racial stereotypes” when it comes to extraterrestrials.

Think of the “orientalised” Ming the Merciless, or indeed the leaders of the Trade Federation in the Star Wars prequels. Or, for that matter, we might consider the grotesque Caribbean caricature that is Jar Jar Binks.

Whether we like it or not, these are influenced (if not actually informed) by racial stereotypes created and distilled for the service of past imperial domination — to justify the white man having the global whip hand.

In that regard, there’s an added twist to the depiction of AI as white, say the researchers; unlike species from other planets, AI concentrates those attributes that were used to “justify colonialism and segregation” in the past: ‘superior’ intelligence, professionalism and power.

In that sense, AI is the ultimate disinterested technocrat — coolly making decisions from on high for the ‘greater good’ — but equally, it stands as a proxy for those old hierarchies of power supposedly consigned to the history books in these allegedly more progressive times. The twist really bites in the depictions of AIs rebelling against their human masters.

“AI is often depicted as outsmarting and surpassing humanity,” said Dihal. “White culture can’t imagine being taken over by superior beings resembling races it has historically framed as inferior. Images of AI are not generic representations of human-like machines: their whiteness is a proxy for their status and potential.”

Dihal and Cave present their case in a paper on decolonising AI, published earlier this year in the journal Philosophy and Technology.

The paper brings together recent research from a range of fields, including Human-Computer Interaction and Critical Race Theory, to demonstrate that machines can be racialised, and that this perpetuates “real world” racial biases.

This includes work on how robots are seen to have distinct racial identities, with black robots receiving more online abuse, and a study showing that people feel closer to virtual agents when they perceive shared racial identity.

“One of the most common interactions with AI technology is through virtual assistants in devices such as smartphones, which talk in standard white middle-class English,” Dihal added. “Ideas of adding black dialects have been dismissed as too controversial or outside the target market.”

The researchers explain that they conducted their own investigation into search engine results, and found that every non-abstract depiction of AI either had Caucasian features or was literally the colour white.

Sophia, built by Hanson Robotics Ltd, speaking at the AI for Good Global Summit in Geneva. Credit: ITU/R. Farrell, Creative Commons.

A typical example of the kind of AI imagery adorning book covers and mainstream media articles, they say, is Sophia: a Caucasian humanoid declared an “innovation champion” by the UN Development Programme. But this is just a recent iteration. There’s plenty more where she came from.

“Portrayals of AI as white situate machines in a power hierarchy above currently marginalised groups, and relegate people of colour to positions below that of machines,” Dihal says. “As machines become increasingly central to automated decision-making in areas such as employment and criminal justice, this could be highly consequential.”

Science fiction has often played a powerful role in inspiring social and technological progress, but the genre — like the writers that create it — hardly exists in a vacuum. It’s steeped in the society that makes it, with all the assumptions, unconscious bias, and even the outright prejudices that entails.

So, however much adherents of science fiction might like to preen themselves on the genre’s oft-claimed progressive virtues, there is no escaping its reactionary tendencies. As with everything else in our inequitable society — it needs to be challenged and called out to keep it honest.

All that said, however — and at the risk of being facetious — we might suggest it’s rather apt to present AI as white. The malevolent kind, that is.

After all, if AI is some codification of a white ‘cultural ideal’, then it simply reflects the bitter truth of Europe’s history of conquest, slavery, colonialism and rapacious empire — not to mention the toxic ideologies (algorithms, you might say) formulated to justify its brutal grip, and which continue to poison our global civilisation to this day.

More reason, surely, to rip away the whitewashed mask and allow AI to represent all colours and none.

As Dihal says: “The perceived whiteness of AI will make it more difficult for people of colour to advance in the field. If the developer demographic does not diversify, AI stands to exacerbate racial inequality.”

MC

This article first appeared on Mark Cantrell, Author.


Mark Cantrell

A UK writer and journalist, Mark Cantrell is also the author of two novels: Citizen Zero and Silas Morlock. Read more of his work at tykewriter.wordpress.com