You’ve heard the expression “a rising tide lifts all boats,” but is that overly optimistic? It comes as no surprise that a well-maintained yacht is going to perform better than an old, patchwork dinghy as the water level goes up. Some boats (and some people) are given better chances of riding the tide of prosperity and innovation.
Look back at the adoption of personal computers to see unbalanced innovation in action. By 1999, the U.S. Bureau of Labor Statistics found that while 49.1% of Asian households and 36.1% of white households had a computer, only 17.9% of Black households did. Even today, only 69% of Black families and 67% of Latino families own a desktop or laptop computer. That lack of early exposure to computers likely hindered the representation of Black and Latino Americans in high-paying tech jobs today: each group makes up roughly 8% of the tech field.
Artificial intelligence is another rift in the making. We’re seeing societal biases carry over into the algorithms that are reshaping our world in real time. It’s concerning, but not a problem beyond repair. From our perspective, there are plenty of ways businesses, nonprofits, and government agencies can use this technology to uplift disadvantaged groups and empower local communities. But first, they need to address the problem.
Recognizing Negative AI Biases
First things first: Can artificial intelligence perpetuate or even worsen existing biases and inequality? Absolutely. Without too much digging, you can find numerous stories about AI adopting harmful biases, stereotypes, or discriminatory perspectives. In all cases, these miscalculations create a new series of hurdles for historically disempowered groups to recognize, address, and overcome.
Let’s start with a subtle yet important example: creating visuals with AI image generators. These models are not unbiased; they are trained on internet scrapes and databases that at best carry slanted perspectives and at worst are rife with racist, misogynistic, homophobic, and other discriminatory content. No matter the data scientist’s intentions, training data built on those harmful sources will shape the pictures the AI produces.
Reporters for the Washington Post came up with some excellent examples, but I thought I would reiterate the point with my own experiment. I gave an image generator two simple prompts:
- “Create an image of recruiters making calls in an office setting”
- “Create an image of an IT professional getting interviewed by a recruiter”
Not every example of AI-generated content will show glaring bias, but there is a clear lack of diversity in the results. You can tweak and fine-tune prompts to generate more inclusive images, but that takes extra prompting some people won’t think to do, and even then it has limits. Subtle uniformity in media can create a sense of “othering” or prejudice toward the people being misrepresented or erased.
More than just reinforcing stereotypes, artificial intelligence with entrenched discrimination can do substantial financial harm. Look at the mortgage approval process. LendingTree analysis shows that 14.4% of Black homebuyers are denied a mortgage, compared to a 9.14% denial rate across the overall population. If lenders train automated approval systems on that historical data, the models may learn to misperceive Black families trying to buy their first home as higher risk, and deny them accordingly.
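To make that mechanism concrete, here is a minimal sketch with entirely synthetic data (every variable name, number, and threshold below is invented for illustration, not drawn from LendingTree’s analysis). A model trained on biased historical decisions reproduces the bias through a proxy feature, even though the protected attribute is never an input:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Applicants are equally creditworthy across groups.
credit_score = rng.normal(680, 50, n)
# A proxy feature (think neighborhood) correlated with a protected class.
zip_group = rng.integers(0, 2, n)

# Historical labels: group 1 received extra, unwarranted denials.
qualified = credit_score > 650
extra_denials = (zip_group == 1) & (rng.random(n) < 0.10)
approved = qualified & ~extra_denials

X = np.column_stack([credit_score, zip_group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Two identical applicants who differ only in the proxy feature
# now get different approval odds from the "neutral" model.
for applicant in ([[680.0, 0.0]], [[680.0, 1.0]]):
    print(applicant, round(model.predict_proba(applicant)[0, 1], 3))

The point of the toy example: nothing in the model “sees” race, yet the biased historical denials teach it to penalize the proxy all the same.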
What’s worse is that these biases, stereotypes, and discriminatory patterns can enter an unquestioned feedback loop. In an experimental study of 200 people, researchers tested whether they could influence humans to make incorrect decisions by having AI feed them bad information. It was almost too easy. When tasked with identifying a fictional disease, many participants learned to accept the AI’s false positives and false negatives, even though 80% of them recognized the tool was making mistakes.
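You can see the shape of that dynamic in a toy simulation (the error rates and deference levels below are assumptions for illustration, not the study’s actual parameters). The more people defer to an imperfect tool, the more their accuracy sinks toward the tool’s:

import random

random.seed(1)
AI_ERROR_RATE = 0.20        # the tool misdiagnoses 1 in 5 cases
HUMAN_SOLO_ACCURACY = 0.85  # accuracy when a participant decides alone

def trial(deference):
    # One diagnosis: with probability `deference`, accept the AI's verdict.
    truth = random.random() < 0.5
    ai_answer = truth if random.random() > AI_ERROR_RATE else not truth
    if random.random() < deference:
        return ai_answer == truth
    own_answer = truth if random.random() < HUMAN_SOLO_ACCURACY else not truth
    return own_answer == truth

for deference in (0.0, 0.5, 0.9):
    accuracy = sum(trial(deference) for _ in range(100_000)) / 100_000
    print(f"deference={deference:.1f} -> accuracy ~ {accuracy:.3f}")

Even a tool that is right 80% of the time drags down decision-makers who would otherwise be right 85% of the time, and the gap widens the more they defer.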
Looking at the Big Picture
Organizations need to consider not only how these negative AI biases exist independently but also how they can compound to do greater damage. Biases against hiring women (like what happened accidentally with Amazon’s automated hiring tool) can feed into assumptions and perspectives that shape future training data, entrenching prejudices as just “reflecting reality.”
In short, companies and communities need to take an active role in counteracting these compounding biases. But how can they take action to empower disadvantaged communities? We’ll share our perspective in our next blog post. Until then, be on the lookout for biases in AI systems. Building a foundation for AI excellence now will determine whether all boats rise with equity in the future.