What should workplace policies on generative AI look like?

'It is critical that you go in with eyes wide open and you're also aware of the risks,' says legal expert, offering tips for HR

Over half of employed Gen Z New Zealanders are using generative AI, despite limited understanding of the risks associated with its use, according to a recent survey.

While one-third of Gen Z respondents said they've used generative AI for work, only 1 in 5 were aware of the cyber security or privacy risks associated with its use, according to the survey, which was commissioned by Kordia and carried out by Perceptive.

Rosemary Wooders, partner at Bell Gully, warns it’s crucial employers take the lead in directing how this type of technology is used in the workplace.

“My view is that employers would be remiss if they did not take advantage of the various benefits that generative AI carries,” says Wooders, an employment law and privacy and data specialist. “It obviously has such a great potential to increase productivity significantly within the workplace, but as an employer, I think it is critical that you go in with eyes wide open and you're also aware of the risks and how to mitigate those.

“If employers are allowing employees to utilise this technology in the workplace, at the very least, they should be considering implementing a policy to provide some form of control over what employees are and are not using generative AI for.”

Fast-moving world of Gen AI creates additional risks

Wooders recommends addressing generative AI in a standalone policy rather than in an employment agreement, since a policy offers the flexibility to be adapted and updated in a fast-changing environment.

“If you’d asked me back in November last year what I thought of ChatGPT, I probably would have given you a blank look - as I think the majority of the population would have done - so it's come on in leaps and bounds over the past year or so.”

The policy could start off with basic rules around how employees are using generative AI, she advises, such as checking outputs for accuracy and making sure employees aren't feeding confidential information, intellectual property or personal information into the tool.

“My core message would be, if such tools could be capable of being used within the workplace, employers should be getting on top of what these tools can do and have an educational piece as well - perhaps some training for employees - and definitely get that policy in place.”

Controlling use of Gen AI in the workplace

A workplace policy on the use of generative AI should cover a number of issues, she says. One is the extent to which artificial intelligence can be used at work.

“There should be, I think, a requirement in that policy that sets out or expressly acknowledges the outputs that are created by generative AI. So, for instance, if I were to use generative AI to create a legal memo, there should be an acknowledgment in that memo somewhere that I have utilised generative AI.”

The acknowledgment should also identify which parts of the document the technology was used for, says Wooders, and it is critical to include a requirement that employees independently review any information produced by generative AI.

Users be warned: false legal cases produced by Gen AI

When people receive material produced by generative AI in response to a question, she says, only about 60 to 70% of it tends to be accurate on average.

“Of course, that's only an average. So, in some instances, it may be fully accurate, but you do need to allow for that error rate.”

Earlier this year, the New Zealand Law Society said its librarians had been receiving queries for a number of case names that turned out not to exist.

“That's because generative AI tools simply made up these cases,” says Wooders. “Also, overseas, various lawyers have been criticised as a result of using fictitious names in legal submissions. So accuracy and checking is critical.”

The third aspect Wooders considers crucial to cover in a workplace policy on the use of the technology is restrictions on inputting personal information, confidential information and intellectual property into generative AI tools.

“When you're a user of generative AI tools, any inputs into the generative AI tool will be effectively stored and then if another user comes along and wants to create a similar type of document, then that information could end up in the hands of a user that's completely unrelated to a particular employer.”

Training required in correct use of AI

A policy should also require that employees are properly trained in the use of AI, covering both how to make the most of it and the particular risk areas.

IP is a crucial area, given the potential for information to end up in the hands of another user. Wooders knows of one legal database that individuals normally pay to access, yet a generative AI tool managed to access and absorb its content.

“There are a few cases going on overseas where creators of various generative AI tools are being sued for alleged breaches of intellectual property so it will be fascinating to see how those cases pan out,” she says.
