Why we need to double our inclusion efforts in the age of AI

“A human rights approach to technology dictates that access and inclusion are at the forefront of discussions around usage, development, and implementation.”

- Tina Kempin Reuter, Carr Center for Human Rights Policy

Have you also noticed a skepticism in yourself when reading a piece of content: scanning for clues as to whether the text was written by a human or cobbled together by one of the myriad algorithmic helpers now on the market?

I often find myself cringing at AI-generated texts because they tend to be generic, repetitive, and lacking in depth and nuance. But that won’t always be the case: the large language models (LLMs) underpinning the AI revolution are learning fast, and it is already becoming more difficult to distinguish human-generated from AI-generated content (which is why discussions around the flagging of AI-generated content are, thankfully, increasing).

But beyond the cringe and bore factors lurks a deeper threat to the goal of inclusive spaces and societies built on belonging, which is, after all, our ultimate mission. So we need to pay attention to this space and engage in discussions around ethical AI, AI justice, and safeguards.

When AI hallucinates it rarely dreams up positive futures for historically excluded and marginalized communities.

Even when algorithmic tools such as OpenAI’s ChatGPT are used to compile facts and figures, the answers they give are rarely inclusive of anyone who is not part of the dominant culture.

The work of the Algorithmic Justice League (AJL) has demonstrated this impressively for many years.

🐘 Watch: “How I’m fighting bias in algorithms”, a TED talk by Dr. Joy Buolamwini, founder of the AJL

The use of generative artificial intelligence (GenAI) is seen as the next innovation step in many sectors of the economy and work. However, the ethical problems of GenAI such as the perpetuation of stereotypes and biases, underrepresentation of minorities, and copyright infringements, to name just a few, pose a real and present risk for its use in the field of Diversity, Equity, Inclusion, and Belonging.

“[…] The world’s most important and most valuable AI company has been built on the backs of the collective work of humanity, often without permission, and without compensation to those who created it.” (404 Media)

It is our responsibility to make sure that the content we are creating, the data we are compiling, and the trends we are tasking algorithms to identify, don't perpetuate the exclusion and exploitation of already vulnerable and disenfranchised populations.

It is our responsibility to intentionally shine a light on those who have historically been rendered invisible.

And it is on us to make sure that people can genuinely shape spaces and experiences, so that they can in fact join, participate fully, and access a sense of belonging.

In the AI age, we must become even more intentional about inclusion, equity, and justice.

Only through the intentional and informed use of new technologies can we take advantage of the opportunities offered by technological advancement, without sacrificing our humanity.
