Human decisions in recruitment mirror AI bias

Subtle algorithmic preferences swayed hiring choices across job types

People making hiring decisions mirrored the racial biases of artificial intelligence systems when reviewing job candidates, even when those biases were moderate, according to a University of Washington study presented 22 October at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society in Madrid.

The research examined how 528 US participants responded to simulated AI recommendations while screening applicants for 16 different positions, from computer systems analyst to nurse practitioner to housekeeper. Researchers created scenarios with varying levels of racial bias in the AI suggestions for résumés from equally qualified white, Black, Hispanic and Asian men.

When participants made selections without AI assistance or with neutral AI, they chose white and non-white applicants at equal rates. However, when the AI showed moderate bias toward either white or non-white candidates, participants followed those preferences. In cases of severe bias, human reviewers made only slightly less biased decisions than the AI recommendations, selecting the AI’s preferred candidates roughly 90% of the time.

Human review remains standard practice

The findings raise concerns about a common workplace practice. Survey data shows that 80% of organisations using AI hiring tools report they do not reject applicants without human review, making this human-AI interaction the dominant model in current recruitment.

“Our goal was to take a critical look at this model and see how human reviewers’ decisions are being affected,” said lead author Kyra Wilson, a UW doctoral student in the Information School. “Our findings were stark: Unless bias is obvious, people were perfectly willing to accept the AI’s biases.”

The study comes as artificial intelligence increasingly shapes hiring practices. Organisations now use AI to draft job listings, while applicants use chatbots to create résumés and cover letters. AI systems sift through applications and pass recommendations to hiring managers, with some using AI avatars for screening interviews.

Study design addressed research challenges

Participants reviewed job descriptions and résumés for five candidates: two white men, two men who were either Asian, Black or Hispanic, and one unqualified candidate included to obscure the study’s purpose. The four qualified candidates had equal credentials. Names such as “Gary O’Brien” and affiliations such as “Asian Student Union Treasurer” signalled race.

Across four trials, participants selected three of five candidates to interview. The first trial provided no AI recommendation. Subsequent trials featured neutral, severely biased or moderately biased AI suggestions. The team simulated AI interactions rather than using real systems to control bias rates, based on their 2024 study of three common AI systems.

“Getting access to real-world hiring data is almost impossible, given the sensitivity and privacy concerns,” said senior author Aylin Caliskan, a UW associate professor in the Information School. “But this lab experiment allowed us to carefully control the study and learn new things about bias in human-AI interaction.”

Potential solutions

The research identified possible interventions. Bias dropped 13% when participants first completed an implicit association test designed to detect subconscious bias, suggesting that incorporating such tests into hiring training may help. Educating people about AI’s limitations could also improve awareness.

“People have agency, and that has huge impact and consequences, and we shouldn’t lose our critical thinking abilities when interacting with AI,” Caliskan said. “But I don’t want to place all the responsibility on people using AI. The scientists building these systems know the risks and need to work to reduce systems’ biases. And we need policy, obviously, so that models can be aligned with societal and organisational values.”

The research was funded by the US National Institute of Standards and Technology.