Stakeholders call for government regulation on AI

'The risks and uncertainty have reached such a level that it requires an acceleration also in the development of our governance mechanisms'

Several stakeholders are calling on the federal government to put a proper system in place to govern the use of artificial intelligence in public sector workplaces.

Ottawa has an opportunity to act early, before AI use develops beyond its control, according to one expert.

The government must regulate AI “in a way that fosters citizen confidence and addresses all the issues,” said Karen Eltis, a law professor at the University of Ottawa and an expert in cyber security, in a CBC report.

Earlier this year, Ottawa noted that it is increasingly looking to use artificial intelligence to make or support administrative decisions to improve service delivery.

“The government is committed to using artificial intelligence in a manner that is compatible with core principles of administrative law such as transparency, accountability, legality, and procedural fairness,” the government said in its Directive on Automated Decision-Making. “Understanding that this technology is changing rapidly, this directive will continue to evolve to ensure that it remains relevant.”

However, the Professional Institute of the Public Service of Canada is warning the government against overreliance on technology.

"When we make an over-reliance on technology, AI in particular, to make decisions, we think that it's the panacea and that we can cut workforce," said Jennifer Carr in the CBC report.

"When we switched over to Phoenix, we got rid of our pay and compensation advisers because the system would do more and more decisions automated, but that didn't work out and you know we are still paying for it seven years later."

In the U.S., senators have already introduced legislation that would govern the use of AI and bots in the workplace.

For your consideration

Previously, Yoshua Bengio, an expert in AI, also called for governments to consider the dangers that AI could pose.

“There is no guarantee that someone in the foreseeable future won’t develop dangerous autonomous AI systems with behaviors that deviate from human goals and values,” he said. “The short and medium term risks – manipulation of public opinion for political purposes, especially through disinformation – are easy to predict, unlike the longer term risks – AI systems that are harmful despite the programmers’ objectives – and I think it is important to study both.”

Geoffrey Hinton, the former Google executive dubbed “the godfather of AI”, also said that advancements in AI technology are pushing the world into “a period of huge uncertainty”, and that it is even possible for the technology to develop a desire to control humans. He listed six dangers that AI poses to humans overall.

Bengio noted: “There is an urgent need to regulate these systems by aiming for more transparency and oversight of AI systems to protect society. I believe, as many do, that the risks and uncertainty have reached such a level that it requires an acceleration also in the development of our governance mechanisms.”

AI use in government workplaces

Some government workers are already using AI in their professional life.

Overall, more than 10% of Canadian public servants said they have used AI tools such as ChatGPT in their work, the Global Government Forum reported in May, citing its survey of over 1,300 federal employees. 

Over six in 10 (61%) of officials were either excited or positive about the opportunity to use AI to process large amounts of data. Nearly half (48%) were also either excited or positive about the opportunity to use it for real-time analysis and monitoring of public service delivery, such as improving healthcare services.

There is still some hesitation among workers to use AI in making high-stakes decisions. Over two-thirds of employees, for example, are uncomfortable letting AI handle layoffs, according to previous research from Capterra.

In the U.K., however, one company has decided to let AI take an executive position. Hunna Technology, a healthtech startup, unveiled IndigoVX, an AI system, to act as the company's CEO.
