The future of AI might be scary, but let’s focus on why it’s scary today
In a recently published open letter, some of the most prominent figures in technology called for a full six-month pause on the development of certain artificial intelligence technologies. The signatories include Elon Musk, CEO of SpaceX, Tesla and Twitter; Apple co-founder Steve Wozniak; and Pinterest co-founder Evan Sharp, among many, many others. In the letter, they ask that all companies working on AI systems more powerful than GPT-4 agree to a hiatus and, in that time, create a set of policies and accountability measures to ensure that technologies with “potentially catastrophic effects on society” are better governed. They even go so far as to suggest that if companies can’t agree to such a pause on their own, governments should step in and impose one.
The letter is future-focused, centered entirely on what AI may do rather than on what it is already doing. Those dangers are real and worthy of our attention, but framing AI as solely a future problem is both dismissive and dangerous. It also distorts our understanding of what AI actually is.
The letter encourages an understanding of AI pulled from a technologically dystopian future. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the letter asks. “Should we risk loss of control of our civilization?” It paints a rather bleak picture, and perhaps justly so. After all, if some of the most prominent developers and educators in the field of AI see this as a genuine possibility, who am I to disagree? What concerns me more is the letter’s insistence that these particular fears, and the prevention of these potential dangers, are where our attention should go.
We shouldn’t think of AI in a sci-fi-esque way, conceptualizing it foremost as amorphous robots that may someday outsmart us and take over our societies. Artificial intelligence is real and more or less everywhere. It is also incredibly problematic, rife with biases and inaccuracies that cause real harm on a large scale.
Artificial intelligence software develops by learning from the information it’s fed, with the goal of ultimately being able to generate similar outputs by itself. This means that if the data set is problematic, the artificial intelligence will be too.
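For readers who want to see that mechanism concretely, here is a minimal, hypothetical sketch. It is not from this article and not any company’s actual system; it assumes Python with numpy and scikit-learn installed. A toy screening model trained on historical decisions that favored one group quietly learns to favor that group, even though nothing in the code tells it to.

```python
# Hypothetical illustration only: biased training data produces a biased model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)        # 0 or 1: membership in a made-up demographic group
skill = rng.normal(0.0, 1.0, n)      # the signal that *should* drive the decision

# Historical decisions favored group 0 regardless of skill.
hired = (skill + 1.5 * (group == 0) + rng.normal(0.0, 0.5, n)) > 1.0

# Train a simple classifier on those historical outcomes.
model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# The learned weight on group membership comes out strongly negative: the model
# now penalizes group 1 purely because the data it learned from did.
print("weight on group membership:", round(model.coef_[0][0], 2))
print("weight on skill:", round(model.coef_[0][1], 2))
```

The sketch is the paragraph above in miniature: nothing in the code mentions discrimination, yet the model reproduces the skew in the data it was fed.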
The danger of AI bias can be hard to see, precisely because the biases it reflects are so deeply embedded in our cultures and societies. Thanks to repugnant Twitter trolls, however, we have a much more obvious example of how this learning process works.
In 2016, Microsoft released Tay, an artificially intelligent chatbot on Twitter designed to learn from what was tweeted at it, in the hope that it would organically figure out how to interact with other users. Twitter trolls were quick to feed it racist, anti-Semitic and sexist rhetoric, and before it was shut down just 16 hours later, it had produced an abundance of abhorrent tweets, including an endorsement of a race war, “Hitler was right I hate the jews,” and “I fucking hate feminists and they should all die and burn in hell.”
Tay and more recent, similarly problematic chatbots make many of AI’s problems salient. First, Tay is an easily understandable model of how AI systems learn, and it shows without a doubt that problematic inputs lead to problematic outputs. Tay is also a great example of another problem: the failure of its designers. How did no one see this coming? Why didn’t any of Tay’s makers account for where inequity and bigotry might show up?
Insufficient forethought and biased data sets have real-life implications, and while their effects may be subtler than Tay’s, their consequences are just as real. Examples are all around us.
In 2018, Amazon scrapped an AI recruiting tool it had been building because it was biased against women. Its programmers intended it to sift through resumes and surface the most eligible candidates, but the AI learned by studying the resumes of employees Amazon had already hired. Because Amazon had historically hired more men, the algorithm learned to favor male candidates. It recognized gendered language and penalized candidates for attending women’s colleges or for using language more associated with women. It had inadvertently been taught that women made for worse candidates.
Problematic, racist AI is even used by government agencies. For years, many states have relied on algorithmic risk assessments to help decide whether to set bail for people arrested and awaiting trial, and even to inform sentencing. These tools are meant to estimate whether a defendant is a flight risk or likely to reoffend. The history of policing, however, in which Black people have been disproportionately targeted and jailed, means the algorithms learned to flag Black defendants as posing a greater risk.
AI also powers a great deal of facial recognition software, which comes with its own invasive and biased applications. First, facial recognition contributes to a mass surveillance state, which raises serious security concerns and threatens citizens’ right to free speech and their ability to protest their governments. In the summer of 2020, when Black Lives Matter protests in response to the death of George Floyd at the hands of police were happening nationwide, facial recognition software was used to identify and arrest protestors.
In addition to posing a threat to democracy, these systems don’t even work reliably, despite being used to make real arrests. The algorithms can be up to 10 times worse at identifying Black faces than white ones, and there have been multiple incidents of Black people being wrongfully arrested because of facial recognition software.
AI poses real threats. It enacts real harm. The systems of social inequality that drive our society are not sidestepped by AI but strengthened, as these algorithms simply learn from and reproduce them.
When current applications of AI are this influential and this problematic, centering the discourse on a distant, machine-led future misses the mark. The letter falls especially short because it makes no reference to the ways AI already disproportionately harms certain groups and perpetuates existing inequalities.
It cannot be ignored that this letter, signed by some of the biggest names in tech, treats AI as though it were detached from its makers. The development of AI, and the harm it causes, has everything to do with the power imbalances in who gets to build it. That the letter fails to recognize this, to acknowledge any of AI’s present-day harms, or to make any attempt at accountability makes it a disappointing distraction from where our attention should be. The biggest issues with AI aren’t going to arrive later. They’re happening now.
Lila Dominus is a student at the University of Michigan. This article was originally published in The Michigan Daily and is posted here with permission from Lila Dominus.