‘Over-promise and underdeliver’: Why so many AI projects fail

Research shows 80% of AI projects fail. An Australian professor shared with HRD how to avoid the pitfalls

How to harness the power of artificial intelligence (AI) is one of the biggest issues companies face, and research suggests many of them are getting it wrong.

Research released by Rand found that about 80% of all AI projects fail, often because organisations struggle to understand the technology’s potential.

“Understanding how to translate AI's enormous potential into concrete results remains an urgent challenge,” the research found.

“Industry stakeholders often misunderstand — or miscommunicate — what problem needs to be solved using AI. Second, many AI projects fail because the organisation lacks the necessary data to adequately train an effective AI model,” the report said.

Professor Nicholas Davis, co-director of the Human Technology Institute at the University of Technology Sydney (UTS), told HRD that to succeed with AI, businesses need to plan and consult better, and not treat workers as “invisible bystanders”.

“You need champions in your organisation – but you also need everyone on board with what you’re doing, especially if you’re trying to find a solution to a key pinch point,” Davis said. “You need to communicate and let everyone know how and why this course of action is being taken. If you can’t do that, you are going to have a bad time and you’re probably going to lose money.”

“Everyone has good intentions, everyone’s positive that the system will perform well, but without proper understanding of how it’s going to work in day-to-day use, you’re not going to get anywhere. This is why stakeholder engagement is so important – especially internal. Younger people are already using AI in ways you don’t even know about, so tapping into their potential and understanding can be invaluable. It takes time, but you’d rather take time than get it wrong.”

Professor Davis, who describes AI as the “beginning of the fourth technological revolution”, told HRD about the mistakes companies can make when implementing AI.

“What we’re seeing is organisations that attempted to deploy AI in functional areas where the problem is quite complex and involves people and multiple factors. You create a solution, and then you scale it across millions of people or thousands of customers – or allocate all your marketing spend – and suddenly the performance of that system is less than perfect and becomes a big problem. A lot of the time, AI can be seen to over-promise and under-deliver, but it’s also down to us to get it right.”

"We're at a pivotal moment where we can have a direct impact on how AI is used before it becomes part of the fabric of organisations globally. That's when it becomes second nature and it's harder to change."

Choose enduring problems to utilise AI

Rand’s research recommends leaders choose “enduring problems” for AI because such problems require sustained time and commitment, stating that “leaders should be prepared to commit each product team to solving a specific problem for at least a year”.

This is a sentiment echoed by Davis, who said the technology can take time to get things right.

“AI has been around for a long time – but it’s generative AI that’s getting people super excited. But it also opens problems around what people think they can do with it and what they’re actually able to do with it. It’s hugely empowering, but from a governance perspective, it could cause more problems.”

“You need to care about the accuracy of how you’re using this technology,” Davis added. “The second thing is, you need to be careful when you decide if you can actually scale the solution across the business and let people go as a result. You could reach a point where the problem of reversing the change becomes bigger than the initial problem.”

Davis also noted to HRD that AI shouldn’t be considered “magic” – because it has to be implemented in the right way from the executive team.

Implementing AI across all stages of the business

Successful implementation of AI happens when all levels of the business are consulted and aware of the changes being made, Davis said.

Additional research from UTS finds that workers can often be “invisible bystanders” in AI adoption – not consulted about the development, training, or deployment of systems – something that needs to change.

By engaging workers on these issues, organisations can draw on their valuable and nuanced insights into the many questions the systems raise – ethical, legal and operational – the findings suggest.

Issues with implementing AI without foresight

As with any initiative a company wants to implement, planning is key – and the use of AI is no exception, Davis said.

“You need to focus primarily on three things – outlining performance risk, malicious use, and external experts. What that means is looking at how much risk the new system could amount to, looking at if the system can be hacked, for example, and making sure it’s thoughtfully governed,” Davis added.