Reports reveal isolated incidents of staff misusing Microsoft Copilot at work
New Zealand's Department of Corrections has said the use of AI tools outside their intended purpose is "unacceptable," following reports of misuse of the technology among employees.
Toni Stewart, the department's chief probation officer, told Radio New Zealand that it had identified some instances of employees misusing Microsoft Copilot at work.
"We've taken action as soon as we've become aware of these instances and made it extremely clear that any use of Copilot outside of its approved use is unacceptable," Stewart said.
Copilot at Corrections
Microsoft Copilot was introduced to the Department late last year as part of its existing Microsoft 365 licence, according to the RNZ report.
About 30% of the department's employees have engaged with Copilot since then.
While uptake remains "relatively low," Stewart noted that the use of Copilot is governed by the department's AI policy.
"The policy is explicit that personal information, including any identifying details, health or medical information, or details relating to people in Corrections' management, must not be entered into Copilot Chat," Stewart said.
Aside from the AI policy, the department also has an AI assurance officer tasked with ensuring the safe and secure adoption of the technology.
It also has an AI working group that provides formal governance of AI, such as embedding safe and ethical AI practices across the department and providing consistent advice on its safe use.
"Our leaders, particularly within Community Corrections where staff write a number of reports, are actively working to ensure proper AI use is an ongoing conversation with staff," Stewart said.
"Staff are regularly reminded of the AI policy and other relevant guidance."
The department also underscored its commitment to maintaining privacy, with privacy teams working to provide guidance on the use of Copilot in the Community Corrections space.
"We are committed to protecting the privacy of the people we work with and maintaining the professional integrity of our assessments, reports, and case documentation," Stewart added.
Last year, the New Zealand government unveiled guidelines to ensure the safe and responsible use of AI tools in the public sector.
"It enables agencies to explore and adopt GenAI systems in ways that are safe, transparent and responsible, and which effectively balance risks with potential benefits of these systems," the guidance read.