How do you hire the best people fast and efficiently? More organizations are pointing to artificial intelligence hiring tools as the de facto answer to this question, thanks in large part to the promise of streamlined workflows, reduced biases, and rapidly filled vacancies. Though these algorithms feel swift and seamless, they’re not without their shortcomings and biases on the back end.
In truth, AI is only as good as its training data and the transparency of its decisions. If the recommendations of an AI screener or video interviewing tool aren’t validated, businesses can inadvertently reinforce discrimination, limit diversity, or advance candidates based on the wrong criteria.
What’s the fix? Researchers, advocates, and thought leaders are encouraging companies to keep a human in the loop for decisions that impact not only their roster but the entire workforce. Here are some key considerations.
Why You Need a Human in the Loop with AI Decisions
First, let’s look at the problem: algorithms don’t always process data as humans would. Sometimes, that’s great, because AI can find connections you might have missed. Other times, these tools fixate on the wrong patterns, treating coincidental correlations as significant predictors. Without human oversight and evaluation, they can make small mistakes or costly missteps.
Amazon’s notorious AI recruiting tool is a prime example. By training the tool on applications that came predominantly from men, the tech giant inadvertently created a resume screener that devalued female applicants, penalizing resumes containing keywords like “women’s basketball” or “Girls Who Code.” Amazon eventually identified the problem and abandoned the tool, but human reviewers from HR or DEI disciplines might have caught it earlier in testing.
Another potential issue has to do with transparency. Many of these artificial intelligence platforms are black boxes, offering no discernible insight into how or why they reach their decisions. Worse yet, some clearly aren’t calibrated to evaluate candidates on the traits, skills, and experiences they need to thrive in a job.
In “The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now,” author Hilke Schellmann tested AI-powered digital interviewers to see how these tools evaluated candidates under unusual circumstances. Applying for an English-speaking position, she and a graduate student read unrelated Wikipedia entries aloud in German and Chinese, and both received passing grades. The transcripts the tools automatically generated had converted both languages into English gibberish. Were the tools scoring them on tone of voice alone? The answer wasn’t clear, but unqualified candidates could easily have made it to a second round.
AI systems also tend to overlook candidates with disabilities. Because these groups are underrepresented in the workforce (Americans with disabilities have an unemployment rate of 7.5%, compared with a 4% overall unemployment rate), fewer AI platforms are trained with these candidates in mind. The result is hurdles that deepen the exclusion of these professionals from the talent pool.
Here are just a few examples: AI interview tools that evaluate facial expressions for “universal emotions” might disqualify people with cerebral palsy or stroke survivors whose expressions fall outside the training data. Relying exclusively on AI assessment games or auditory questions might penalize candidates with dyslexia, ADHD, or hearing impairments who need more time to absorb information before giving thoughtful responses.
In these and other cases, experienced HR professionals and recruiters can adjust in ways that artificial intelligence, without proper human guidance, cannot. If you want to broaden and diversify your talent pool rather than needlessly restrict it, you need to find a way to harmonize both AI and people.
Creating a Better Balance Between Humans and AI in Hiring
In our experience, finding an equilibrium between algorithms and people is an intentional process. There’s a fair amount of trial and error in getting the balance right, but working with the right IT staffing and solutions partner can help identify opportunities for change. Here’s where to begin.
Evaluate Your AI Capabilities
Do you have a firm understanding of how your current AI hiring tools work? If you don’t, your vendors certainly should. A quality AI hiring vendor should be able to articulate how its algorithms function, which variables they measure, and what conditions will disqualify applicants. If a vendor treats its AI as an ineffable black box, you’ll never be certain whether its rubric actually surfaces better candidates.
Additionally, involve HR and DEI experts while you’re defining requirements, testing the training data, and launching the solution. They can identify weak points and unintentional biases in the process, account for any disability accommodations you’ll need to make, and flag where you’ll need more representative data to guard against discrimination.
That way, you can rectify issues before they impact your workforce or turn into discrimination lawsuits.
Have Humans Review AI Results
Even though artificial intelligence can accelerate reviews, candidates harbor sizable distrust of AI’s ability to accurately assess talent. In our Work Futures: Trends Impacting 2024 report, we found only 24% of workers believed AI should be used to review resumes and applications. Much of the negative sentiment stems from horror stories in the press, as well as the inscrutability of how these algorithms reach their decisions.
You can instill confidence in both candidates and your organizational stakeholders by auditing these tools effectively. On a regular basis, review a sample of AI assessment scores to understand why candidates at different rankings were scored the way they were. That way, you develop familiarity and trust with the algorithm rather than taking its scoring system on faith.
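As a concrete illustration, here’s a minimal Python sketch of one audit you could run on exported screening results: comparing selection rates across candidate groups using the four-fifths “adverse impact” ratio commonly cited in U.S. hiring guidance. The record format and field names here are illustrative assumptions, not any particular vendor’s export.

```python
# Minimal audit sketch (record format and field names are illustrative
# assumptions): compare selection rates across candidate groups using the
# four-fifths "adverse impact" ratio cited in U.S. hiring guidance.
from collections import defaultdict

# Hypothetical export: one record per candidate screened by the AI tool.
records = [
    {"candidate_id": "a1", "group": "group_a", "advanced": True},
    {"candidate_id": "b1", "group": "group_b", "advanced": False},
    # ... a larger sample pulled from your applicant tracking system
]

def selection_rates(records):
    """Share of candidates in each group advanced past the AI screen."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        advanced[r["group"]] += int(r["advanced"])
    return {g: advanced[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate.
    Ratios below 0.8 are a conventional red flag for human review."""
    best = max(rates.values()) or 1.0  # avoid divide-by-zero if nobody advanced
    return {g: rate / best for g, rate in rates.items()}

rates = selection_rates(records)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

A ratio below 0.8 doesn’t prove discrimination, but it’s a conventional signal that a human should dig into how those candidates were scored.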
Moreover, you need to empower decision makers to override AI assessments when the tool reaches an inaccurate or biased conclusion. People often defer to the judgment of artificial intelligence, even when they know it’s wrong, because they perceive machines to be more objective. It may take practice before people feel comfortable pushing back, but doing so will save you from false positives and keep great candidates from being discarded as false negatives.
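One lightweight way to make overrides routine is to build them into the workflow itself: anything that isn’t a clear pass goes to a human by default, and every disagreement with the AI is logged for later review. The sketch below assumes hypothetical score and confidence fields and thresholds; it illustrates the pattern, not any vendor’s actual interface.

```python
# Human-in-the-loop routing sketch (field names and thresholds are
# illustrative assumptions): the AI alone never rejects; borderline calls
# go to a recruiter, and human overrides are logged for later auditing.
from dataclasses import dataclass

@dataclass
class Screening:
    candidate_id: str
    ai_score: float       # 0-100 score from the screening tool (assumed)
    ai_confidence: float  # 0-1 model confidence, if the vendor exposes one

AUTO_ADVANCE_SCORE = 85   # only clear passes skip straight through
MIN_CONFIDENCE = 0.7

def route(s: Screening) -> str:
    """Decide who looks at this candidate next; the AI never auto-rejects."""
    if s.ai_score >= AUTO_ADVANCE_SCORE and s.ai_confidence >= MIN_CONFIDENCE:
        return "advance"
    # Borderline scores and low-confidence calls always get human eyes,
    # which keeps false negatives (great candidates screened out) visible.
    return "human_review"

overrides = []  # audit trail of human decisions that disagreed with the AI

def record_decision(s: Screening, human_decision: str) -> None:
    """Log cases where the recruiter overruled the tool's routing."""
    if human_decision != route(s):
        overrides.append((s.candidate_id, route(s), human_decision))

print(route(Screening("c42", ai_score=68, ai_confidence=0.9)))  # human_review
```

The key design choice is that the algorithm can advance a candidate on its own but can never reject one; rejecting always requires a person, so pushing back on the machine becomes the path of least resistance rather than an act of defiance.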
Review Your Hiring Processes Regularly
Your employees and consultants offer valuable insight into how your AI tools actually work. At Dexian, we give candidates a voice to help our clients improve their hiring processes. Every month, we assemble a core group of our consultants and interview them before they start their assignments. We treat these interactions as a focus group, digging into the good and bad of the hiring lifecycle to see what works and to remedy any friction points.
This increasingly helps us identify issues with AI implementation as well as with the overall hiring process. Feedback from consultants who’ve experienced your AI assessment and interviewing tools can help you enhance sourcing, screening, interviewing, onboarding, and retention in critical ways that might otherwise be overlooked.
Choose an Ethical AI Partner
You don’t have to navigate these complicated processes alone. An experienced IT staffing and solutions partner can simplify them.
Entrusting your hiring tools to vendors that prioritize ethical artificial intelligence can help you avoid situations where candidates are unfairly evaluated. And if you’re working with a staffing firm that knows how to navigate your AI tools, candidates will be better prepared to put their best foot forward in AI-driven assessments.
With Dexian, you’ll receive expertise on both sides, so you can hire quickly, efficiently, and with minimal bias.