As AI tools enter daily workflows, Nicholas Ngo of TSMP Law Corporation shares what HR should know to manage legal risks
A recent study by cybersecurity firm Cyberhaven revealed a troubling trend: nearly 35% of the data that employees feed into AI tools like ChatGPT would be classified as sensitive.
This includes HR records, source code, and proprietary R&D material. The same report highlighted that mid-level employees, who are often trusted with operational autonomy, are the most frequent users of these AI tools.
With AI platforms becoming a fixture in daily workflows, this pattern creates legal exposure on multiple fronts.
Companies risk violating confidentiality agreements, breaching internal policies, and even falling afoul of the Personal Data Protection Act 2012 (PDPA) if sensitive personal information is improperly disclosed.
Despite these risks, many organisations still lack dedicated AI-use protocols. Nicholas Ngo, Director at TSMP Law Corporation, spoke with HRD Asia to explain the real risks of unchecked AI use at work and how HR can take charge before policies and trust break down.
According to Ngo, one of the most serious threats to companies’ confidential information comes from employees inputting confidential material into AI platforms. This can include everything from source code and product strategies to confidential client lists.
"When data is uploaded into a publicly available AI platform, it may be used to train the system’s learning model," he says. "This means another user could receive an output that indirectly draws from your confidential material."
He adds that AI tools don’t discriminate between data types. Once confidential or sensitive information is entered, the company effectively loses control of where it might resurface.
Yet many employees aren’t aware of this risk: AI tools are widely accessible, come with no immediate restrictions, and feel deceptively private.
"It’s easy to copy and paste into ChatGPT or other tools. And without clear boundaries or rules, employees may use these platforms without realising the impact," Ngo explains.
Employment contracts often contain broad confidentiality and IP clauses, but are they enough?
"Traditional clauses can theoretically cover AI misuse, but they’re often too general to raise employee awareness," Ngo explains.
"If a clause says 'don’t disclose confidential information,' an employee might not realise that uploading it into an AI tool counts as disclosure."
This disconnect has given rise to what some call "shadow AI": AI tools that employees use without the organisation’s approval or oversight.
To address this, Ngo recommends updating contracts and policies to explicitly address prompt-based data sharing and the use of external AI platforms.
"Calling it out directly matters. A clause that says, for instance, 'disclosure on external AI platforms without approval is prohibited' can shift behaviour more than abstract or general language requiring the protection of confidentiality," he says.
While the PDPA covers personal data, it’s only one piece of a larger puzzle. Ngo emphasises that various other categories of data are not protected by the PDPA.
"PDPA only deals with personal data, and even then, we have to distinguish between personal data that receives substantive protection and other personal data," he says.
"So technically, there's the huge body of data… A subset of that is personal data, and then there are three further subsets to personal data. One is personal data that has substantive PDPA protections, second is publicly available personal data, and third is business contact information."
Only the first category, non-public personal data that is not business contact information, triggers substantive obligations under the Act.
"When it comes to personal data breaches, you need to assess whether the data breach meets the threshold of significant harm or scale," he says. "If it does, companies must notify the authorities and affected individuals."
For other types of data, however, there is not always a clear statutory path. Ngo warns that once information has been ingested into a public AI system, it’s nearly impossible to retrieve or restrict its further use.
"You’re no longer in control," he says. "That’s why taking preventive action is more effective than relying solely on enforcement."
According to Ngo, Singapore has not yet enacted AI-specific legislation, but guidance from the Personal Data Protection Commission (PDPC), the Infocomm Media Development Authority (IMDA) and other regulatory bodies offers benchmarks for fair AI usage.
When an employee violates AI usage expectations, whether disciplinary action holds up depends heavily on the relevant documentation.
"Start with the employment contract," Ngo says.
"Does it require the employee to follow policies as they evolve? That’s your foundation."
Next, the policies themselves must be robust and clearly communicated. "Training sessions and written acknowledgements help ensure that employees can't claim ignorance later on. It’s not just about punishment. It’s about clarity and fairness," he explains.
Ngo recommends a layered approach: contractual obligations, clear internal policies, and frequent training to create defensibility and accountability.
"You’re not just plugging gaps after a breach. You’re actively reducing the likelihood of one occurring, which is ultimately more important," he adds.
In companies where departments handle different kinds of data, defining "sensitive" information isn’t always straightforward.
"Legal should guide the overall framework, defining what counts as protected or confidential," says Ngo.
"But department heads should understand what is competitively sensitive or operationally crucial in their operations."
HR, he says, plays a unique role in identifying employee behaviour trends, managing training, and reinforcing best practices across departments. "It’s a collective responsibility, and without collaboration, policies often miss critical blind spots."
Ngo also notes that the choice of AI tools matters. Free versions usually retain input data and use it to train their models. Enterprise-grade versions, by contrast, offer stronger security and compliance assurances, often with a commitment that inputs will not be used for training.
"If your company insists on banning all AI tools, employees may find workarounds. But if you give them a secure, approved version, they’re more likely to follow the rules," he says.
Ngo warns against implementing AI tools before legal and compliance teams have vetted them. The consequences, he says, can be immediate and far-reaching.
"One scenario is data leakage, such as sensitive business information or designs being recycled into public outputs," he explains. "Another is reliability... We’ve seen legal submissions that relied on fabricated case law from generative AI tools, essentially hallucinations. Courts don’t tolerate that."
Ngo cites recent cases in both Singapore and abroad where users, including lawyers, faced consequences for using AI-generated content without verification.
"It’s tempting to treat AI output as plug-and-play. But the reality is, hallucinations still happen, and fact-checking is not optional," he says.
If employers want to future-proof their agreements, Ngo recommends adding targeted language around AI tool usage without limiting flexibility.
"You can’t fully predict how AI will evolve," he notes. "But you can expand your confidentiality clauses to say that disclosure to external AI platforms counts as a breach unless specifically approved."
He suggests including a flexible definition of AI tools, covering not only current platforms like ChatGPT, Gemini, or Copilot, but also future iterations with similar functions.
"Keep the language broad enough to adapt, but include non-exhaustive examples to make it relatable for employees. That balance helps avoid ambiguity while staying prepared," he adds.
Ngo also points HR and legal teams to the PDPC’s advisory guidelines on AI, which outline how personal data may be used and under what circumstances.
"In this domain, legal and tech expertise must work hand in hand. Reviewing documentation is just one part. You also need to understand how the technology processes and stores the data, and implement appropriate technical security measures."
Ngo believes that companies assessing new AI tools should evaluate not only the legal terms but also the underlying technology.
"Yes, consult your lawyer, but legal insight alone isn’t enough," he says. "You also need to assess how the technology works, where the data is stored, and whether the protections are sufficient. That’s where the tech side becomes just as important."
He recommends involving IT and cybersecurity teams early in the vetting process. "Legal can read the policy documents, but your tech experts will know whether those promises have been adequately reflected in reality."
Ngo leaves HR and compliance leaders with a clear takeaway: preventing a potential breach is far less painful than managing one that’s already happened. But to make that possible, AI compliance must be treated as a company-wide responsibility, not just a legal checkbox.