How can managers use AI ethically?

AI bias may be far more serious than human bias unless it is recognised and controlled

We already see algorithms at work and hear the headlines about artificial intelligence (AI), yet many senior managers remain sceptical and ignore the issue, according to Dr Mathew Donald, principal of Dr Mat.

Whilst the robot or cyborg may immediately come to mind, neither will emerge as a single change that instantly replaces millions of jobs. Rather, the shift will occur in waves over an extended timeframe, added Dr Donald.

“AI will likely be adopted only once it proves to be functional, safe, with favourable business economics and customer acceptance,” he said.

It is likely that current algorithms and basic AI functions will advance with the upcoming 5G rollout and faster technology, and the first functional robot may soon emerge.

Dr Donald, who is also the author of Leading and Managing Change in the Age of Disruption and Artificial Intelligence, added that there is potentially a serious problem with AI bias.

“As with humans, AI and robots will have bias that arises from their programmes and functions; my conference audiences regularly raise AI bias and error as issues to consider now,” said Dr Donald.

“Some may argue bias exists in humans as well, yet AI bias may be far more serious unless it is recognised and controlled.

“Sure, every single human has some bias based on their education, family, experiences and socio-economic background, but those are unique to the individual.”

According to Dr Donald, AI bias will likely be systematic, derived from the system and its original programmer, so every organisation that purchases it inherits the same base bias, one that can permeate organisations, industries and even the world.

“It is possible that bias could remain hidden, so potentially locking out millions of workers without managers even knowing about it,” said Dr Donald.

“Managers today are regularly trained in how not to discriminate, despite any inherent bias they may hold against the aged, a particular gender, or even people with red hair.”

In the main, societal values, business rules and the legal system reduce and prevent blatant discrimination and bias in modern workplaces, he added.

AI bias and error may be much harder to understand, identify and prevent than the human equivalent, and much harder to constrain with laws and societal values as AI creates new transactions and ways of working.

Dr Donald argues that, if left unscrutinised, AI could over time discover new transactions that have the effect of harming the poor and disabled, or make decisions that are perceived as discriminatory when judged against empathy and other societal values.

AI may soon be in a position to read and assess customer enquiries, with the ability to prioritise them, draft answers and even deliver first verbal responses via recordings, he added.

“This may initially sound great for organisations that want to be efficient and save money,” he said.

“Yet if all your competitors take on the same efficient technology, every organisation may end up giving similar customers different responses and priorities based on the same underlying bias, thus effectively and systematically denying some customers a fair hearing.”

Dr Donald said that one can only imagine the embarrassment and uproar if the public, customers and investors found out that a new AI system was not giving everyone a chance, or was missing the most important calls.

“Sure, the new AI systems may work 24/7 and be hugely efficient, but management will still need to consider broader issues before purchase.”
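To make the enquiry-triage concern concrete, here is a minimal, hypothetical Python sketch, not drawn from Dr Donald, of how a single vendor-supplied scoring rule could systematically deprioritise some customers across every organisation that licenses it. The postcode uplift and all numbers are invented purely for illustration.

```python
# Hypothetical illustration: if every organisation buys the same triage model,
# any bias baked into its scoring is reproduced everywhere at once.
from dataclasses import dataclass

@dataclass
class Enquiry:
    customer_id: str
    postcode: str
    value_last_year: float  # customer spend over the last 12 months

# Assumed proxy learned from historical data: certain postcodes were answered first.
HIGH_PRIORITY_POSTCODES = {"2000", "3000"}

def vendor_priority(enquiry: Enquiry) -> float:
    """Vendor-supplied score that quietly favours some postcodes."""
    score = enquiry.value_last_year / 100
    if enquiry.postcode in HIGH_PRIORITY_POSTCODES:
        score += 50  # hidden uplift that crowds out other customers
    return score

def triage(enquiries: list[Enquiry]) -> list[Enquiry]:
    """Order the queue exactly as the purchased model dictates."""
    return sorted(enquiries, key=vendor_priority, reverse=True)

if __name__ == "__main__":
    queue = [
        Enquiry("A", "2000", 300.0),
        Enquiry("B", "2612", 900.0),  # higher-value customer, 'wrong' postcode
        Enquiry("C", "3000", 100.0),
    ]
    for e in triage(queue):
        print(e.customer_id, round(vendor_priority(e), 1))
    # Customer B is out-ranked despite the larger spend: a systematic,
    # postcode-driven bias that a simple audit of the ordering would reveal.
```

A periodic audit of the ordering against a neutral baseline is one simple control a manager could request before such a system goes live.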

Ethical frameworks for AI bias

In a competitive, fast-moving age, it may be tempting for management to offer up the efficiencies and savings to their boards, customers and staff.

However, AI adoption without due diligence could lead to value losses, shareholder revolt and customer loss if bias and error are discovered some time after adoption, according to Dr Donald.

“If AI is revealed to have significant error and bias, it may also lead to government enquiries and tighter laws, limiting its development and functionality into the future,” he said.

“Managers should be aware that an AI rollout comes with significant risk if there is insufficient due diligence, or if it lacks an assessment against broader societal and ethical frameworks.”

Traditional business assessment tools, such as payback periods and net present value, should no longer be the sole determinants of investment in this new age, added Dr Donald.

Algorithms and AI are already seen daily in online marketing and social media feeds; whilst the technology is not yet a complete human equivalent, it has already been accused of influencing US elections.

“One can easily imagine the logic of new AI automatically closing accounts and sending debt collectors when customers do not pay bills, whereas a human may create exceptions for sickness, death and special circumstances, or seek to sustain the corporate image,” said Dr Donald.

“Whilst AI will not replicate the human mind for some time, there is still considerable potential for harm if managers do not inject sufficient controls and governance.

“It is now important that management assess AI holistically, ensuring that new structures and processes are designed with empathy, societal values and government regulations in mind when adopting AI.”
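As a closing illustration of the account-closure example above, the following hedged Python sketch contrasts a purely threshold-driven decision with one wrapped in the kind of human exception and review step the article calls for. The Account fields and the hardship flag are assumptions made up for this illustration, not a description of any real system.

```python
# Hypothetical contrast: a rule-only agent versus one with human-style exceptions.
from dataclasses import dataclass

@dataclass
class Account:
    customer_id: str
    days_overdue: int
    hardship_flag: bool = False  # e.g. recorded sickness or bereavement

def naive_ai_decision(account: Account) -> str:
    """The kind of logic the article warns about: one threshold, no empathy."""
    if account.days_overdue > 90:
        return "close account and refer to debt collectors"
    return "send reminder"

def governed_decision(account: Account) -> str:
    """Same rule, but wrapped in an exception and a human review step."""
    if account.hardship_flag:
        return "pause collections and refer to a human case manager"
    if account.days_overdue > 90:
        return "refer to a human for approval before closure"
    return "send reminder"

if __name__ == "__main__":
    overdue = Account("X", days_overdue=120, hardship_flag=True)
    print(naive_ai_decision(overdue))   # closes the account regardless
    print(governed_decision(overdue))   # routes to a person instead
```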
