By Kip Havel, Chief Marketing Officer
“AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” That’s Sam Altman, CEO of OpenAI, sharing some (hopefully) tongue-in-cheek thoughts on the growing AI market. Some people fear the disruptive power of this technology. It will certainly change the world. But does that change need to be fearsome? We don’t think so, as long as those in power implement this game-changing tech with decency and respect at every turn.
We’re not alone in this. Earlier this year, more than 1,000 AI experts, researchers, and backers (including Elon Musk and Steve Wozniak) called for a pause on the creation of “giant” AIs to study and mitigate the problems presented by the technology. Notice how they’re not calling for an end to research. Most are excited about the potential, but don’t want to greenlight innovation that could create an existential threat to people and the planet.
How can we create ethical AI? Or better yet, how can businesses maximize the power of AI while staying true to their mission and values? By applying some lessons from the past, success stories from the present, and ingenious mindsets for the future into every AI business decision.
History Doesn’t Repeat Itself but It Rhymes
AI isn’t the first technology to radically alter our paradigm in seemingly scary ways. Almost serendipitously for our purposes, one of the biggest scientific disruptors was at the center of a $950-million blockbuster this year. Nuclear fission, which led to the atomic bomb through the work of J. Robert Oppenheimer and the Manhattan Project, resulted in thousands upon thousands of casualties across Hiroshima and Nagasaki and kicked off the doomsday dread of the Cold War.
How did a brilliant and thoughtful man, who attended the Ethical Culture Fieldston School no less, end up leading the charge to create one of the deadliest weapons in history? The answer is complicated, but there are some lessons we can take away for the present.
If you watch the interviews for The Day After Trinity (1981) about Oppenheimer and the development of the atomic bomb, you get a sense of the singular, almost unbending pursuit of innovation. Veterans of the Manhattan Project like Robert Wilson reflected on how researchers worked day and night with limited sleep and how “it was very hard […] to stop and think.” Maybe driven by a sense of patriotism or possibly rapturous creation, these scientists marched indefatigably towards their test south of Los Alamos.
However, this wasn’t only a risk during the patriotic fervor of World War II, but in the aftermath too. Freeman Dyson, the famed physicist and mathematician who worked with several Manhattan Project alumni, had this to say:
“I felt it myself: The glitter of nuclear weapons. It is irresistible if you come to them as a scientist, to feel it’s there in your hands to release this energy that fuels the stars. To let it do your bidding, to perform these miracles, to lift a million tons of rock into the sky, it is something that gives people an illusion of illimitable power and in some ways is responsible for all our troubles, this technical arrogance that overcomes people when they see what they can do with their minds.”
The lesson? Innovation at the razor’s edge is hypnotic, drawing some of the foremost scientific minds into the exploration of boundaries and the elevation of human knowledge and potential. In these moments of fervor, we need guardrails. Disruptors need to take a deep breath, put ego aside, and consider the implications of their actions. Discovery doesn’t exist in a vacuum, and our decisions need to reflect that.
Consider this. There were proposals to detonate a test atomic bomb with a group of Japanese delegates invited as witnesses, but the plan never gained much traction. Could lives have been saved? We’ll sadly never know.
What’s Being Done in the Here and Now?
Though the mistakes of the past cannot be undone, business and government leaders can take action now to use this technology to make lives and our society better. It won’t happen without hard work and passion, but vocal advocates for the ethical use of AI in both the private and public sectors are already having meaningful conversations and making clear choices.
Look at the international community. This year at the Responsible AI in the Military Domain Summit (REAIM 2023), 47 nations endorsed a framework to “build international consensus around responsible behavior and guide states’ development, deployment, and use of military AI.” China and the U.S. have both shown interest in outlawing lethal autonomous weapons systems (LAWS) and AI access to nuclear grids, while also codifying human-in-the-loop parameters for life-or-death decisions. Those are big steps, if governments stay true to their promises.
The private sector isn’t without companies putting their culture, principles, and people ahead of blind innovation. Look at Deloitte as an example. Its purpose is “making an impact that matters, together,” and it sees ethical AI as part of the picture. The firm has created a guide entitled “Transparency and Responsibility in Artificial Intelligence,” which not only calls on other businesses to embrace this ethos but offers AI-driven business models that capture this full potential in action.
The goal is to make AI recommendations as transparent as possible, explaining the underlying computational process to avoid biases or bad decisions. Why? Because AI is only as good as its model inputs and data.
Garbage in, garbage out is risky business when AI is making decisions that impact the lives of people, communities, or nations. Deloitte and other businesses with artificial intelligence ethics on their minds are pushing for a variety of inspection methods to evaluate variables, open the black box to see how algorithms operate, and watch for emerging data biases. Principled AI leaders don’t rest on their laurels: They stay active and shepherd this technology toward a more ethical future.
Yet these actions are only scratching the surface of what needs to be done for conscientious AI implementation. We’re looking at how people are responding to AI—and why our future depends on listening to the diverse perspectives of experts and everyday people.