'Organizations need to be very intentional around putting protections in place'
The number of employees putting private company information into AI platforms will likely increase if organizations do not put guardrails, policies and controls in place, an industry insider has warned.
A recent KPMG survey found that 24 per cent of users said they've entered proprietary company data, such as human resources or supply chain information, into public generative AI platforms (up from 16 per cent in 2023), and 19 per cent said they've entered private financial data about their company (up from 12 per cent).
The lack of private AI platforms exacerbates the problem: only 14 per cent of respondents use a private platform developed by their employer, while the rest rely on publicly available tools.
Stavros Demetriou, partner and national leader of KPMG in Canada’s people and change practice, believes the solution lies in companies being deliberate about the guardrails they put in place.
Whether through private systems or robust public-use policies, the need for action is clear. Without it, the risks will only grow, leaving companies vulnerable to both immediate and lasting repercussions.
“Organizations need to be very intentional around putting protections in place,” Demetriou said.
According to the KPMG survey, while more than 50 per cent of users said their employer encourages the use of generative AI by building it into project checklists, nearly 40 per cent said they are not aware of any controls their employer has over its usage.
Demetriou recommended that policies spell out how an organization monitors the information being entered from company computers into public AI systems, along with steps to take when an employee does upload private and confidential data to these systems.
But balancing employee privacy with company security is another challenge, and the key lies in establishing acceptable use standards.
“Depending on your jurisdiction, there are privacy regulations and client agreements every organization needs to abide by,” he explained. “If employees abide by these, they have nothing to worry about. Most organizations are not interested in tracking personal use unless it breaches these rules.”
Another critical element is training and communication: ongoing dialogue and education to ensure employees understand both the benefits and risks of generative AI.
Ultimately, the survey’s findings shine a light on the gap between employees’ use of generative AI and the safeguards employers have in place. According to Demetriou, 89 per cent of business leaders surveyed in a separate study said their organizations have implemented strict guidelines on generative AI use. Yet only 18 per cent of employees reported the existence of such policies in their workplaces.
“That’s a huge gap,” he said. “This shows there is a clear disconnect between the leadership and the employees, and this could be contributing to the risky behaviour.”
While generative AI can undoubtedly lead to efficiencies and a culture of innovation, the implications of entering private company data into public AI systems can be significant.
“If this information becomes part of the tool’s algorithm, it could be used to generate responses for other users outside of the organization,” Demetriou said. “Cybercriminals do target these tools and personal data.”
Beyond the immediate risks, there are long-term implications and the chance of “operational disruption,” he said. Compliance violations can result in fines and lawsuits, while breaches erode trust with clients and employees. Reputational damage can lead to lost business opportunities and talent attrition, further compounding the impact.
“Trust takes time to rebuild. It’s crucial for customer relationships and a positive workplace culture,” Demetriou said. “Addressing breaches requires resources, from investigations to implementing new security measures, which can decrease productivity and increase operational costs.”
So, what steps should organizations take? They need to establish a trusted AI framework, encompassing policies, processes, and controls, and consider developing private generative AI systems. While proprietary systems require time, money, and resources, they provide greater control over sensitive information.
“Before the policies, you have to put in place a strategy for AI,” he said. “Then the policy should make it clear how the use of generative AI complies with privacy laws, client agreements, and professional standards.”
The policies should also address the aftermath of data misuse and ensure that AI-generated responses are used judiciously, as relying on inaccurate information for business decisions could put the organization at risk, Demetriou said.