AI generates false allegations of wrongdoing at KPMG, Deloitte, PwC and EY

Group of academics uses Google Bard tool for submission to parliamentary inquiry

A group of academics has apologized to the big four consultancy firms – KPMG, Deloitte, PwC and EY – after admitting they used artificial intelligence to make false allegations of serious wrongdoing in a submission to a parliamentary inquiry, according to the Guardian.

The academics, who specialise in accounting, were urging a parliamentary inquiry into the ethics and professional accountability of the consultancy industry to consider broad regulation changes, including splitting up the big four.

Part of the original submission relied on the Google Bard AI tool, which generated several case studies of misconduct that were highlighted in the submission.

The original submission falsely accused KPMG of being complicit in a “KPMG 7-Eleven wage theft scandal” that led to the resignation of several partners, said the Guardian. It also accused KPMG of auditing the Commonwealth Bank during a financial planning scandal. KPMG never audited that bank.

The submission also wrongly accused Deloitte of being sued by the liquidators of the collapsed building company Probuild for allegedly failing to properly audit its accounts. Deloitte never audited Probuild.

The submission raised concerns about a “Deloitte NAB financial planning scandal” and wrongly accused the firm of advising the bank on a scheme that defrauded customers of millions of dollars, said the Guardian. Deloitte told the Senate there was no such scandal.

It also accused Deloitte of falsifying the accounts of a company called Patisserie Valerie. Deloitte had never audited the company.

“It is disappointing that this has occurred, and we look forward to understanding the committee’s approach to correcting this information,” said Deloitte’s general counsel, Tala Bennett.

The sections of the submission that contained false information generated by artificial intelligence have now been removed. A new document is expected to be uploaded to the Senate inquiry website, said the Guardian.

Emeritus professor James Guthrie claimed responsibility for the error, absolving the other academics of blame.

“Given that the use of AI has largely led to these inaccuracies, the entire authorship team sincerely apologises to the committee and the named Big Four partnerships in those parts of the two submissions that used and referenced the Google Bard Large Language model generator,” he said in the letter.

While conceding the factual errors were “regrettable,” Guthrie insisted that “our substantive arguments and our recommendations for reform remain important to ensure a sustainable sector built on shared community values”.
