Should HR drop ‘gut feel’ from decision-making?

There’s still an element of subjectivity involved in data-based decision making, says one top academic.

Conventional wisdom holds that algorithms are objective and efficient, and that they remove bias from the decision-making process.

For example, an increasing number of businesses are using these algorithms to identify their ‘ideal candidate’, believing that doing so removes manager bias from the process. However, there’s a problem.

Uri Gal, an associate professor in the Discipline of Business Information Systems at the University of Sydney Business School, suggests these same algorithms might be rejecting candidates without HR being aware of it.

“Ostensibly, the process is meant to be objective and rational,” Gal said.

“But the problem is that algorithms are not really objective in the sense that collecting big data actually involves a significant amount of human judgment.”

Gal cited an example: consider a manager who wants to assess the performance of his or her employees over the previous year.

There are a number of ways the manager can do this, including collecting data on how much revenue each employee has generated for the company over the last 12 months, how many clients they have interacted with, or how many leads they have generated.

“So we have three data points to assess what we call ‘performance’, but why these three? We could have used any other combination of data points,” said Gal.

He suggested that other data points could be equally valid, such as feedback from customers, feedback from colleagues, how much time employees spent on email, or how many days they were absent from work.

Further, when algorithms rely on inaccurate, biased or unrepresentative data, they may systematically disadvantage racial and ethnic minorities, women and other historically disadvantaged groups.

“There are a variety of different data points we could have used and the choice of data points inevitably involves human judgment,” said Gal.

“So it’s not really objective in the sense that people often think when they are talking about algorithmic decision-making.”

An additional contentious issue is the weight applied to each data point. In the above example, does each of the three data points deserve equal value or importance? Or is one more important than the others?

“We might or we might not decide that one data point is twice as important as the other one and the third one is only half as important as the second one, which again involves human judgment,” Gal said.
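
To make this concrete, here is a minimal sketch in Python (not from the article; all names and figures are invented) of how the same employees can end up ranked differently depending purely on the weights chosen, before any algorithm enters the picture:

```python
# Hypothetical illustration: the same raw data, two defensible weightings,
# two different "top performers". All names and figures are invented.

employees = {
    "Avery":  {"revenue": 120_000, "clients": 14, "leads": 35},
    "Blake":  {"revenue":  95_000, "clients": 22, "leads": 60},
    "Carmen": {"revenue": 140_000, "clients":  9, "leads": 20},
}

def normalise(values):
    """Scale raw numbers to the 0-1 range so data points measured in
    different units (dollars, counts) can be combined into one score."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi != lo else 0.0 for v in values]

def score(weights):
    """Weighted sum of normalised data points. Choosing these weights
    is the human judgment Gal describes; nothing in the data dictates them."""
    names = list(employees)
    cols = {m: normalise([employees[n][m] for n in names]) for m in weights}
    return {n: round(sum(w * cols[m][i] for m, w in weights.items()), 2)
            for i, n in enumerate(names)}

# Revenue counted twice as much as client contact, leads half as much
# (the 2x / 0.5x weighting from Gal's example):
print(score({"revenue": 2.0, "clients": 1.0, "leads": 0.5}))
# -> Carmen comes out on top.

# Equal weights over the very same data points:
print(score({"revenue": 1.0, "clients": 1.0, "leads": 1.0}))
# -> Blake comes out on top, with no change to the underlying data.
```

Neither weighting is more “correct” than the other; the choice between them is exactly the human judgment Gal describes.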

Moreover, he suggested there is no one correct way to assess the performance of employees.

“This definitely involves human judgment which is subjective rather than objective,” he said.

Aaron McEwan of CEB agreed there are dangers inherent in an over-reliance on data analytics in decision-making. While he is an advocate of the use of HR data analytics, he uses the saying ‘garbage in, garbage out’ to highlight the risk.

“About 54% of heads of HR or heads of data analytics are concerned about the data quality,” he said.

It’s also critical to understand what decisions HR is trying to shape with the data. Where is the organization going and what data is going to help inform decisions about that direction?

“That’s the most important thing: if you are asking the wrong questions you are going to be collecting data that doesn’t actually support the decisions you are trying to make,” McEwan said.

As to the ‘human element’ involved in data-based decision-making, McEwan agreed there is a genuine risk of poor choices being made. For example, a manager might form an early judgment about whether or not someone has the potential to be a high performer.

Does the manager then start collecting data that confirms the initial hypothesis and rejecting data that doesn’t?

“There is always a human element both in the selection of what data we choose to analyse and then also in the analysis of that data,” he said.

However, the alternative – and the traditional means by which decisions were made – is not ideal.

“Traditionally you would have had a group of leaders with their own personal biases sitting around making subjective decisions about people’s future in terms of pay rises, promotions and so on. The addition of objective data, even if it’s not perfectly objective, I think increases the accuracy of these decisions.”

The key, he added, is evidence-based collection and analysis practices, so that at least some of the subjectivity is minimised.

Gal also did not dismiss analytics, saying there is “room” for certain technologies to assist people in making decisions. However, he urged a “more aware approach” to what these technologies are being used for.

“It’s fine to have certain tools in place to help go through large amounts of data because computers and algorithms are more effective at that than human beings. But I think it is probably wise to maintain some sort of human oversight over this process.”

He added that algorithms inevitably try to construct models of human behaviour, and those models will always be simplified versions of a very complex reality: the way in which people behave in organizations.
