The Ethical Future of AI Depends on Your Business

By Kip Havel, Chief Marketing Officer

Why do over two-thirds of Americans in a Reuters survey say they're afraid of artificial intelligence? Honest answer? They're afraid of the technology's unpredictable consequences. That's why businesses, governments, and IT experts need to take an active role in the ethical implementation of AI, putting care into policy, practice, and practical application.

Though it’s easier said than done, it’s not impossible. In my last blog, I talked about lessons we can learn from past instances of disruptive tech and what’s being done now to mitigate risk. However, we need to build the right foundation and create the proper mindset to keep the future of artificial intelligence safe. Let’s take a look.

Giving the People What They Want (When You Can)

How do you handle AI with your people? The key is listening to employees about their perceptions of artificial intelligence. For example, Google chose not to renew a Pentagon contract for automatic recognition software used in drones after 4,000 employees protested the project. The Silicon Valley company still pursued defense contracts but excluded any involving the development of AI-powered weapons.

Since then, the Department of Defense has adopted its own detailed principles for using AI, prompting Google to return to bidding on the Joint Warfighting Cloud Capability contract and other aspects of the military's battlefield networks. There's more work to be done, but one of the biggest companies in the world and the DoD grew ethically because a few passionate people spoke up.

So, how do you mirror Google? For starters, create an open-door policy for discussing your approach to artificial intelligence, asking for your team's input along the way. At Dexian IT Solutions, we not only empower our people to create their own career trajectories; we also bring them into the conversation as we work to do good with innovation. Everyone at Dexian can share their thoughts, concerns, and ideas as we strive together to create a better future.

A Mindset for the Future

There is no shortage of voices calling for the ethical use of artificial intelligence in the business world, and they're moving the needle in the right direction. However, the current approach likely has limitations and might need some retooling.

You're likely familiar with Blue Sky Thinking, a mindset that encourages people to brainstorm without being limited by practicality. Yet it carries an inherent limitation: you're still trying to bridge the gap between existing practices and where you want to be. We prefer the Black Sky Thinking model, challenging expectations and moving forward with creative assurance, especially with AI.

Here's an example. The authors of a paper called "Labour, Automation, and Human-Machine Communication" explored human-machine relationships in workplaces where AI has been implemented. One opportunity they observed was the chance not only to educate workers on the inner workings of AI but to empower them to take action.

Artificial intelligence will be as imperfect as the people building and training it. And that's okay, as long as we keep that in mind. Giving people a voice to speak up and "help to open contestation paths for workers to talk back and question data, systems, and algorithmic outputs" can be the key to safe, ethical, and unbiased decision-making. If we didn't have people to challenge algorithmic decision-making, we might not be here today (Google Stanislav Petrov if you want a hair-raising sense of how having a human in the loop makes all the difference).

Another way is recognizing that soft skills are just as important as hard skills in the creation of artificial intelligence. People who keep ethical considerations in mind are a barrier between us and catastrophe. As Timnit Gebru, Founder and Executive Director of The Distributed AI Research Institute (DAIR), puts it, "You want people in AI who have compassion, who are thinking about social issues, who are thinking about accessibility."

Your own hiring team and your technology staffing partner are your game changers on this front, especially if they're committed to doing good. When they nurture genuine relationships in their talent pool, learn your values inside and out, and eliminate barriers to finding people whose principles match yours, maintaining artificial intelligence ethics won't feel like rocket science. And it can keep the world turning for the better.