Untrained and uncertain: How gaps in AI adoption create 'trust crisis'

Just offering training will not be enough, according to Australian expert

A lack of formal workplace training in AI isn’t just creating a skills gap; it’s fuelling a crisis of trust as leaders fail to provide clear guidance and support, according to one of Australia’s leading experts on technology transformation.

Katherine Boiciuc, Regional Chief Technology and Innovation Officer, Oceania at professional services firm EY, said workers want to use AI and are already showing signs of unlocking productivity.

“There’s a trust crisis unfolding in Australian workplaces. Workers are enthusiastic about AI and are benefiting from immediate productivity gains, yet lack confidence as leadership are not providing clear guidance, training or support," Boiciuc (pictured above) said in a statement.

“The reality is that change won’t come from just offering training; it’s about changing the culture. Leaders must go beyond formal training and foster a culture of experimentation, learning and AI fluency."

Boiciuc’s comments came as EY released new research on how AI is reshaping work, which found that more than two-thirds, or 68%, of the desk workers surveyed had used AI in the last month.

According to the Australian AI Workforce Blueprint, while the majority of the 1,000 workers surveyed are using AI, there were low rates of formal workplace training, along with patchy worker confidence, and concerns over data breaches and job losses.

Just over a quarter of workers, or 26%, even reported being banned from using the technology.

Only 35% of workers had been trained in how to use AI, highlighting a critical gap between apparent enthusiasm to adopt the technology and the rollout of formal workplace training.

Training gaps were paired with low confidence and some fear among workers, with more than 70% concerned about breaching data or regulatory requirements, 60% worried about losing their critical thinking skills, and 54% concerned about job losses.

More than half of all workers continue to rate their AI proficiency as low, with confidence worse in older generations. Some 46% of Gen Z rate themselves as proficient in using AI, compared with 37% of Millennials, 25% of Gen X and just 18% of Baby Boomers.

Risk of untrained employees

“It is never too late to provide upskilling on the effective use of AI,” Professor Kai Riemer from the University of Sydney told HRD.

“Without proper understanding of how it works, people tend to uncritically adopt AI responses that might contain problematic inaccuracies,” said the professor of Information Technology and Director of Sydney Executive Plus, which provides workplace AI training.

Untrained workers using AI are exposed to a number of potential risks, including having the misconception that AI “thinks” and “learns”, Riemer said.

“AI responds like a human and its responses sound convincing, yet it works nothing like a human,” Riemer said.

“We find that people tend to both underutilise the technology and have unrealistic expectations at the same time when they do not properly understand it.”

Workers who do not understand the ecosystem of AI tools are also at risk of falling prey to data leaks and data breaches, Riemer said.

“Without understanding the ecosystem of AI tools, people tend to ‘leak data’, potentially exposing companies to data breaches.”

What happens in the AI shadows 

Businesses that fail to train staff are also at risk of “shadow AI use”, according to Shaun Davies, founder and principal of The AI Training Company. Companies may find enthusiastic employees are using the technology without disclosing it to their managers.

“Then you might have problems where data is being inputted into AI systems that may not be as secure as you would like,” he said.

A secondary problem is the undocumented proliferation of tools and processes, which can become difficult for businesses to unwind, Davies said.

“In a medium- to large-sized organisation, you might find that each department has evolved a little culture of using a particular AI tool that's interesting and innovative, but when it comes to procurement and process and risk minimisation, it's not really an ideal situation.”

Davies, whose clients include Google, and who was formerly an AI safety trainer at Microsoft, recommends training for organisations to enhance innovation and safety.

“People usually have some idea of how to prompt, maybe they do it sporadically, but when it comes to knowledge of the more advanced features that are becoming available, the level of knowledge is surprisingly low.

“My goal is always to take people from the beginner’s stage, where they're just interacting with the chatbot and using it on an ad hoc basis, to a more intermediate level, where you're sort of using features … and developing a toolkit where you have the right tool for the right job.”
