Will AI use put your organisation at risk?

Employees’ use of AI risks sensitive data ending up in the public domain and privacy breaches, says lawyer

As rising numbers of employers look to use AI in the workplace, one lawyer warns that caution is needed.

While acknowledging there are some advantages to be gained from the tech, Milly Hung, partner at Stevenson, Wong & Co, says that unless organisations put policies in place to regulate AI use, there are very real risks that sensitive data could end up in the public domain and that employee privacy could be breached.

“I think it is a very hot topic most companies have to face nowadays, especially when they're dealing with a lot of personal data – no matter whether it’s employees or clients – and sensitive company information,” says Hung, who’s based in Hong Kong.

Measures need to be taken urgently to ensure AI use doesn’t compromise organisations, she says.

Lack of action on AI risks data leakage

“Check and balance is important at the start-up stage to avoid cyber-leakage and protect personal data because if we miss the golden period to set the common code, it will run like the water under the bridge,” says Hung, who emphasises that a wait-and-see approach will be too late.

“If someone’s using AI to make an analysis or draw up a proposal, it's quite easy to get a leakage of data because you just don’t know where that kind of data is being stored or what kind of control you have over that.

“Most of the time, if the quantities are very great, they will be stored in the cloud, but they won’t know where the cloud is, so which country’s law they are subject to would also be unknown.”

While AI use is still in its early stages in Hong Kong, she says, if organisations don’t take action now, it could easily be too late to protect their information. It’s especially important to address this through company policies because “nowadays the law is not developing as fast as the IT world and in the last one or two years, [the IT world] has developed at such a surprising speed,” says Hung.

Revision of employment handbooks for AI advised

Before you decide to use AI in your daily operations, you have to understand how the technology works, then review your internal policies and decide what kind of protections are needed, she says.

She strongly advises revising employment handbooks – which typically provide general guidance to employees on the company’s workplace policies, among other matters – to cover AI use before employees start using the technology.

“To avoid data breaches and enhance cybersecurity, employment handbooks can include policies on confidentiality, data protection rules and guidelines for employees in different scenarios,” says Hung. “Clear security and data protection guidelines have to be established for handling clients’ and employees’ information.”

Hung already knows of some organisations in Hong Kong putting policies in place specifically for AI, including those in money-handling businesses.

AI useful for routine work

In terms of what a policy should include, that depends on the extent to which the company decides to use AI in its daily operations, she says.

“The first thing is the company has to make up their minds what kind of AI they will use. They should then communicate with their own legal advisors to try to fulfil the internal policies before they make the AI applications a reality.”

In the legal world itself, Hung sees the use of generative AI as limited. “Hong Kong is a common law jurisdiction, therefore you cannot rely on AI to do the analysis on case law.

“It may be useful for some routine work, for example, it could be okay to generate some simple contracts or tenancy agreements, but if you totally rely on AI to give you a comprehensive analysis, it is very risky.”

Employee training in AI use vital

Knowing whether the information AI produces is reliable is difficult, says Hung. “How can one make sure the data fed to the AI is 100% accurate?” she says. “If it is not accurate, it can create a great problem.”

Once an organisation has decided how AI can be used internally, the next priority is to organise training for employees around the policies, she says.

“Training is very important. Even if you have guidelines, if people don’t understand them or apply them every day, it’s a pointless effort.

Policy breaches and data leakage from AI use

“You have to be very clear that nobody should breach the internal policies of the company. If there is a breach, the guidelines should cover what happens. For instance, they should be set up so that if there is a personal data leakage, people know what they should do to report it. If it happens, cut it short fast. In my experience, time is of the essence.”

The risks for companies that don’t prepare are much more serious than people can imagine, says Hung.

“They will just open the floodgates. If there are no guidelines, once the personal data or information leakage has already taken place, how can you get it back? For employers, it’s very important to make that clear.”

 
