AI Bias in Recruiting: Why Does It Happen?

Michelle Lee is currently a Data Scientist at Atipica and a Senior Data Analytics Instructor at Product School.

Amazon was recently reported to have built a machine learning model aimed at ranking candidates for its open roles. However, the team building the tool was reportedly disbanded because of the biases the model learned. For example, the model ranked candidates lower if their resumes included the word “women’s,” as in “women’s soccer team.”

Moreover, a former White House adviser on technology and economics argues: “If the engineers developing [recruiting] tools commit certain kinds of errors or oversights in gathering their input data or building machine learning systems, unintentional bias could be the direct result.” Let’s look at a few ways in which a model could learn biases:

Biased training data: Supervised machine learning models are set up to learn which features predict a target (e.g., hired or not). They do this by looking at historical data (e.g., previous hires) to determine which features significantly predict that target.

  • For many tech companies, men still outnumber women, especially in technical roles. If a model only sees skewed historical training data, it will learn that men are more successful candidates and rank them higher.
  • Even if the model does not take gender into account, it may pick up on features that correlate to gender, such as being a part of a women’s club, or the age at which a candidate became interested in computing.

Machine learning teams should strive to use diverse training sets for their models to minimize the possibility of AI biases.
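
To make this concrete, below is a minimal sketch in Python (scikit-learn, with entirely invented data) of how a classifier trained on skewed historical hiring decisions can end up penalizing a proxy feature that correlates with gender, even though gender itself is never fed to the model:

```python
# Minimal sketch with invented data: a classifier trained on skewed
# historical hiring decisions learns to penalize a proxy feature that
# correlates with gender, even though gender is never a model input.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic history: most past applicants and hires were men.
gender = rng.choice(["M", "F"], size=n, p=[0.8, 0.2])
years_exp = rng.normal(5, 2, size=n).clip(min=0)

# Proxy feature: "women's club" membership correlates with gender.
womens_club = ((gender == "F") & (rng.random(n) < 0.6)).astype(int)

# Historical label reflects past, biased decisions (a bonus for men),
# not true ability.
hired = (years_exp + rng.normal(0, 1, n) + (gender == "M") * 1.0) > 5.5

# Gender is deliberately excluded from the features.
X = pd.DataFrame({"years_exp": years_exp, "womens_club": womens_club})
model = LogisticRegression().fit(X, hired)

# The weight on the proxy feature comes out negative: the model has
# effectively learned to downgrade the "women's" signal.
print(dict(zip(X.columns, model.coef_[0])))
```

In this toy simulation the model assigns a negative weight to the proxy feature, which is exactly the kind of pattern that a more diverse or rebalanced training set helps prevent.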

Feature creation and selection: Common text-processing approaches, such as bag of words, count the occurrences of every word in a corpus and use those counts to classify documents or predict sentiment.

  • Using all words as features can give the model more information, but it can also replicate real-life biases; one example is treating the presence of the word “women’s” on a resume as a predictive feature.
  • Limiting the types of words the model uses can limit biases. One approach is to use only words that relate to an applicant’s skills (e.g., Python, SQL, Docker) in the modeling, and not gender, racial, or cultural attributes.

Machine learning teams should promote fairness through their feature choices, and focus on features that reflect an individual’s ability to do the job.
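
As a rough illustration (hypothetical resumes, using scikit-learn’s CountVectorizer), the sketch below contrasts an unrestricted bag of words, which happily turns “women” into a feature, with a vocabulary restricted to skill terms:

```python
# Minimal sketch with hypothetical resumes: restricting the bag-of-words
# vocabulary to skill terms so the model never sees gendered or cultural
# words such as "women's".
from sklearn.feature_extraction.text import CountVectorizer

resumes = [
    "Captain of the women's soccer team; built ETL pipelines in Python and SQL",
    "Deployed services with Docker and Kubernetes; strong Python background",
]

# Unrestricted bag of words: every token, including "women", becomes a feature.
unrestricted = CountVectorizer().fit(resumes)
print("women" in unrestricted.vocabulary_)  # True

# Restricted vocabulary: only job-relevant skill terms are allowed as features.
skills = ["python", "sql", "docker", "kubernetes", "etl"]
restricted = CountVectorizer(vocabulary=skills)
X = restricted.fit_transform(resumes)
print(restricted.get_feature_names_out())  # only the skill terms
print(X.toarray())                         # per-resume skill counts
```

Restricting the vocabulary is not a complete fix, but it keeps overtly gendered or cultural tokens out of the feature space by construction.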

Even though bias with real-life consequences can easily creep into hiring algorithms, the HR industry is becoming more reliant on AI. This scares us a little bit. In one study of 500 job seekers and 500 hiring professionals, about 35% of talent acquisition (TA) professionals used artificial intelligence (AI) in the candidate selection process. TA professionals should ask their HR tech vendors the hard questions: Who is/are the founder(s)? Are they passionate about people-first AI? Who built the models? How do they minimize bias in their algorithms?

Just because one company was unable to build an unbiased hiring model does not mean it cannot be done. Intention is key.

And as the HR industry is starting to rely more on AI tools, building them to be inclusive is even more important. For years, Atipica has advocated for an increased focus on diversity and inclusion in data science. We believe that building inclusive algorithms and products starts with building developer teams that are “drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct.”

The opportunity here is huge. Studies have shown that managers display “affinity bias” in hiring people who are like them. Talent acquisition professionals, in trying to source candidates who are more likely to be hired, may perpetuate these biases. Building AI tools with fairness at the core can help surface strong non-traditional candidates by bypassing human biases around qualifications and competence. Businesses stand to benefit by hiring great employees and becoming more diverse along the way.

At Atipica, the people building the algorithms and UX prioritize building an inclusive team and product. We are constantly thinking about inclusion. Our algorithms come from diverse training sets and we select features less likely to be biased. We build with an understanding of both the challenges of hiring under-represented candidates and the negative impacts of impostor syndrome and implicit bias. Diversity is in the composition of our team, our models and our mission. That’s Atipica.

To learn more about Atipica, email us at success@atipica.co.

 
