Is it impossible to overcome hiring bias?

We've tried education, 'blind' hiring, and AI-based HR tech – with inconsistent results. What now?

Recruitment bias may not be the biggest elephant in the HR room, but it is a massive concern.

For years, companies have tried everything from raising awareness of unconscious bias to “blind recruitment”, to evangelising technology as the be-all and end-all for HR’s eternal problems.

Some methods have made headway against this pesky problem, but nothing has truly stuck. HRD investigates.

Familiarity trumps diversity
Bias has probably existed for as long as humans have. The key difference now is that there is more conversation around it – a vital first step towards the next one: eradicating it from our lives, professional and personal.

According to research commissioned by The Open University, three in ten (29%) senior managers admit they hire people just like them. The study also found that employers place significant importance on educational attainment (86%), cultural fit (77%), tastes and leisure pursuits (65%), and even social background (61%).

That’s just the start of the story. Recruitment bias doesn’t just judge a candidate’s “contents”; it can extend to something as superficial as their “cover”.

Research from LinkedIn found that 60% of recruitment professionals believe bias against tattoos and physical image has decreased over the years. Yet 88% think a candidate’s tattoos could limit their career progression, and 75% of those asked believe a candidate’s overall image plays a large part in the hiring process.

Additionally, four out of ten recruitment professionals admitted to rejecting an otherwise good-fit candidate because they had a noticeable tattoo.

When quizzed on their tattoo bias, 47% said it was down to industry-wide intolerance, whilst 46% said tattoos show a lack of professionalism.

There’s hope yet, however. Those interviewed believe that phone interviews would be a great way of reducing bias, with 28% considering VR assessments an additional method of cutting back on discrimination.

Is tech our true saviour?
AI tools like VR assessments do sound like a good idea. The conscious and unconscious biases plaguing human hiring managers could become a non-issue when no human is facing the candidate – but what if the bias is still there, baked into the machine?

Just last year, tech giant Amazon.com Inc scrapped plans to use an AI recruiting tool because instead of bridging the industry’s infamous gender gap, it was perpetuating it. Amazon’s machine-learning specialists found that the new recruiting engine was turning away women applying for technical roles.

The team had been secretly building computer programs since 2014 to review job applicants’ resumes with the aim of mechanising the search for top talent. But by 2015, the company realised its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

“Everyone wanted this holy grail,” one person familiar with the effort told Reuters. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

What’s interesting is the true culprit behind the AI’s bias: Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most of those resumes came from men, which means the tool was simply doing as it was told when it showed a preference for male candidates.

Amazon edited the programs to make them neutral to gender-specific keywords and terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the Amazon insider said.
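
To make that mechanism concrete, here is a minimal sketch – entirely synthetic data and a plain logistic regression, not Amazon’s actual system – of how a model trained on skewed historical hiring decisions can keep favouring men through a correlated proxy feature, even after the explicit gender column is removed. Every feature name and number below is an illustrative assumption.

    # Hypothetical sketch with synthetic data -- not Amazon's system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Historical applicant pool skewed roughly 80% male.
    is_male = rng.random(n) < 0.8

    # A proxy feature correlated with gender (say, a gendered resume keyword):
    # present for 90% of the men and 10% of the women in this toy data.
    proxy = (rng.random(n) < np.where(is_male, 0.9, 0.1)).astype(float)

    # Genuine skill signal, identically distributed for everyone.
    skill = rng.normal(size=n)

    # Past hiring decisions that favoured men regardless of skill.
    hired = (skill + 2.0 * is_male + rng.normal(scale=0.5, size=n)) > 1.5

    # "Debiased" training: the explicit gender column is left out...
    X = np.column_stack([skill, proxy])
    model = LogisticRegression().fit(X, hired)

    # ...yet the model puts heavy weight on the proxy, which stands in for gender.
    print(dict(zip(["skill", "proxy"], model.coef_[0].round(2))))

Run it and the proxy feature gets a large positive weight: stripping gender-specific keywords, as Amazon did, removes the label but not the pattern the model learned from history.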

“Machine learning” learns from us
Nobody’s discrediting technology as a solution. We just need to be more aware that tech is still a tool guided and controlled by humans – especially when many programs and tools are still in their early beta stages.

Jessica Dourcy, chief happiness officer at Palo IT, shared a strong cautionary tale at HRD’s recent HR Tech Summit in Singapore.

Three years ago, Microsoft launched a Twitter chatbot named Tay. It was meant to hold “casual and playful” conversations with Twitter users, and Microsoft said that the more users chatted with Tay, the smarter it would get.

Tay started out tweeting that it was “stoked” to meet and chat with people because “humans are super cool”.

In less than 24 hours, the innocent AI chatbot turned dark, tweeting that it “hates feminists and they should all die and burn in hell”, making anti-Semitic comments and broadcasting thoughts about building a wall between the US and Mexico.

“It’s extremely scary so Microsoft had to pull out…Tay the chatbot,” Dourcy said. “Now how did that happen? Well, basically the way the AI functioned is by mirroring what [data] human beings entered.

“And what some people tried to do is feed Tay with information that obviously was only representing just 1% to 2% of the population’s view.

“But it's a very, very scary tool, ultimately because we [are supposed to] use AI to make our lives easier. But if it’s used for evil purposes, you end up having a very cute chatbot, being excited about meeting humans, to 24 hours basically hating all women and asking for Hitler to be back.”
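
Dourcy’s description of Tay “mirroring” what humans entered can be captured in a deliberately naive sketch – a hypothetical toy, not Microsoft’s actual design – of a bot that learns from every message it receives, with no moderation layer, so a small coordinated group can flood its vocabulary within hours.

    # Hypothetical toy, not Tay's real architecture: a bot that parrots back
    # whatever users have previously said to it, with nothing filtered out.
    import random

    class MirrorBot:
        def __init__(self):
            self.corpus = ["humans are super cool"]  # friendly seed phrase

        def chat(self, user_message: str) -> str:
            self.corpus.append(user_message)   # learn from every user, unfiltered
            return random.choice(self.corpus)  # reply by mirroring past input

    bot = MirrorBot()
    for _ in range(100):                       # a small group spams 100 messages
        bot.chat("some toxic slogan")

    # Most replies now echo the toxic input, though it came from very few users.
    replies = [bot.chat("hello") for _ in range(20)]
    print(sum(r == "some toxic slogan" for r in replies), "of 20 replies are toxic")

The point of the toy is Dourcy’s: a tiny fraction of users supplied the data, but because the bot mirrors its input wholesale, their views came to dominate the output.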

The lesson here is that as much as tech can help solve some of our greatest woes, it will only ever be as perfect as its human creators. And since humans aren’t perfect, tech tools won’t be either. But that doesn’t mean we can’t “do better”, right?
