Last winter during a bitterly cold rush hour, a father and son were in a terrible car accident off the Kennedy Expressway in Chicago. Tragically, the man died before help arrived. Paramedics were able to transport the child to the nearest hospital, where he was brought into an operating room for surgery. The surgeon entered the room but immediately stopped, saying, “I can’t operate on this boy; he is my son.”
Who was the surgeon? His mother. This slight variation of the surgeon’s dilemma story helps illustrate how unconscious bias works. Every day, people unknowingly form opinions about others based on minimal input; this is known as unconscious bias. These thoughts are usually based on deeply held beliefs. No one wants to be biased, but it is part of being human. Unconscious bias can be related to race, gender, age, religion, sexual orientation, veteran status, disability status, socio-economic status, college attended and many other attributes. In fact, at least 150 different types of unconscious bias have been identified and studied.
In this article, we’ll explore ways that unconscious bias appears in talent acquisition, review how AI can be used to reduce bias in the recruiting process and share tips for how to select an AI partner that can help employers reduce bias.
Unconscious Bias in Talent Acquisition
While employers strive to uphold legal standards for equal employment opportunities, unconscious bias issues in talent acquisition still exist.
Unconscious bias can occur at many stages throughout the recruiting process. For example, a recruiter may unconsciously write job descriptions that appeal more to a certain group of people. A recruiter looking for an IT developer might advertise a role as a “Java Ninja,” a title whose masculine-coded language could discourage women from applying.
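The job-description problem above lends itself to a simple automated check. The sketch below scans a posting for gender-coded words; the word lists are a small hypothetical sample for illustration, not a validated lexicon (real tools draw on much larger research-based lists).

```python
# Illustrative check for gender-coded language in a job posting.
# MASCULINE_CODED and FEMININE_CODED are tiny made-up samples,
# not a validated lexicon.

MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "fearless"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def coded_terms(text: str) -> dict:
    """Return the masculine- and feminine-coded words found in the text."""
    words = {w.strip(".,;:!?").lower() for w in text.split()}
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

posting = "We need a Java Ninja who thrives in a competitive, fearless team."
print(coded_terms(posting))
# → {'masculine': ['competitive', 'fearless', 'ninja'], 'feminine': []}
```

A recruiter could run each draft posting through a check like this and rebalance the flagged wording before publishing.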
During the candidate screening process, individuals might also experience affinity bias, a specific type of unconscious bias that occurs when an evaluator favors candidates who share his or her background. An instance of this might include a hiring manager seeking candidates with an MBA from a particular school. At a company level, bias can even extend to citing a company’s culture fit as a reason to hire a certain type of person, i.e., hiring only younger workers.
When unconscious bias spreads across a candidate pool, bigger risks, such as a lack of organizational diversity, may emerge. A Deloitte study found that a diverse workforce is twice as likely to meet or exceed its company’s financial goals. Another study, by Catalyst, cited a 34 percent higher return to shareholders for companies with more women in executive positions.
Without a diverse workforce, organizations also run the risk of legal action. A recent age discrimination lawsuit against three large technology employers alleged that millions of older workers were blocked from seeing the companies’ Facebook job ads because of their age.
Beyond legal action, companies also risk inadvertently harming their own recruiting efforts. Silicon Valley has long been criticized for its lack of workforce diversity, yet 47 percent of millennials say they prefer working for a diverse company.
How AI Can Reduce Bias in the Hiring Process
Artificial intelligence can decrease unconscious bias in recruiting practices in two key ways.
- First, as a sophisticated pattern detector, AI can find bias across millions of data points.
- Second, when potential candidates are identified, AI can catalogue profiles based solely on relevant skill sets. AI can also be programmed to ignore all demographic information, such as zip codes, race or gender.
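The second point can be sketched in a few lines of code. This is a minimal illustration, assuming a candidate profile stored as a dictionary with hypothetical field names: demographic fields are stripped before the profile reaches any scoring logic, so downstream models see skills only.

```python
# Minimal sketch: remove demographic fields from a candidate profile
# before it reaches any scoring logic. Field names are hypothetical.

DEMOGRAPHIC_FIELDS = {"name", "gender", "race", "age", "zip_code", "photo_url"}

def redact(profile: dict) -> dict:
    """Return a copy of the profile with demographic fields removed."""
    return {k: v for k, v in profile.items() if k not in DEMOGRAPHIC_FIELDS}

candidate = {
    "name": "A. Jones",
    "gender": "F",
    "zip_code": "60601",
    "skills": ["java", "sql"],
    "years_experience": 6,
}
print(redact(candidate))
# → {'skills': ['java', 'sql'], 'years_experience': 6}
```

In a production system this redaction step would sit between profile ingestion and scoring, so no demographic signal ever enters the model’s input.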
While many vendors today offer AI-enabled capabilities for tasks such as interview scheduling or candidate communications, using AI specifically to reduce unconscious bias is still emerging. Montage recently launched Unbiased Candidate Review, which helps companies reduce discrimination during the selection and interview process. Unbiased Candidate Review, part of Montage’s on-demand voice and video interviewing solution suite, hides the candidate’s identity and voice until a hiring manager enters feedback on the candidate.
Another example of fighting bias through AI is the story of entrepreneur Iba Masood. A native of Pakistan who graduated from college in the United Arab Emirates, Masood had a difficult time finding a tech job after graduating. She was not from the traditional pool of young, male, Ivy League candidates that seek developer roles. So she created her own AI solution, TARA, to combat bias in the tech recruiting process. Today, candidates who use TARA’s online freelancer marketplace are judged only by the code they produce. Companies looking for project-based developers bid based on the current skills needed for a project, with no knowledge of the candidate’s socio-economic or previous professional background.
Potential AI Risks
AI is promising as a solution, but its algorithms must be built appropriately and monitored frequently to make sure AI does not perpetuate the very bias it was programmed to erase. As AI emerges to help reduce unconscious bias, several groups, including federal agencies, are observing AI’s impact to make sure risks are appropriately addressed.
Some of these groups include the following:
- OpenAI, a nonprofit that creates AI systems via open source for the broader AI community to analyze.
- The AI Now Institute, which reviews AI’s ongoing impact on society.
- Explainable AI efforts, which focus on making an algorithm’s reasoning traceable back to its human creators so those links are not lost.
In addition to these formal groups monitoring AI, organizations can take steps to make sure the correct AI processes are in place. Because AI is constantly evolving, errors in an AI platform’s logic can quickly grow, making problems hard to trace. This is especially true if errors are made at the beginning of the process, causing the common problem of “garbage in, garbage out.” However, there are strategies teams can put in place to reduce risk:
- Recruiting teams can combine their expertise with data gathered from AI to produce more inclusive job descriptions and candidate pools in the future.
- Bias can also be reduced by setting strategies internally to identify and eliminate bias through training and other programs.
- Organizations should assign diverse teams to build AI algorithms so a wider set of ideas is represented in the AI’s output.
- Finally, companies should conduct ongoing audits of AI algorithms to test that efforts related to AI are progressing appropriately.
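The final point, ongoing audits, can take a concrete form. One common heuristic for spotting possible adverse impact is the “four-fifths rule”: a group whose selection rate falls below 80 percent of the highest group’s rate warrants investigation. The sketch below applies that rule to made-up illustrative counts; it is an example of one audit check, not a complete fairness evaluation.

```python
# Sketch of a periodic bias audit using the "four-fifths rule"
# heuristic: flag any group whose selection rate is below 80% of
# the highest group's rate. The counts below are illustrative.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, applied); returns rate per group."""
    return {g: sel / app for g, (sel, app) in outcomes.items()}

def adverse_impact_flags(outcomes: dict, threshold: float = 0.8) -> list:
    """Return groups whose selection rate falls below threshold * best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [g for g, r in rates.items() if r / best < threshold]

outcomes = {"group_a": (30, 100), "group_b": (18, 100)}
print(adverse_impact_flags(outcomes))
# → ['group_b']  (group_b's rate of 0.18 is only 60% of group_a's 0.30)
```

Run on each hiring cycle’s data, a check like this turns the audit recommendation above into a recurring, measurable process rather than a one-time review.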
How to Select and Evaluate an AI Provider
When assessing enterprise AI partners for your organization, make sure to review the following:
- Ask questions. If a potential partner isn’t willing to explain how its algorithms work, walk away. A good partner is prepared to support your business and will be able to articulate how the solutions work in terms you understand.
- Understand the vision. Not all partners will be experts in your industry. However, from a technical perspective, ask to see a long-term product roadmap to understand the company’s plans for product evolution and what influence you may have over the roadmap’s features.
- Agree on the support model. Make sure the partner has a thorough understanding of how you operate and how AI folds into that process. Share what is critical to you and ensure they’re ready to commit to supporting those items for you. Without this, your support of your own clients could be affected.