Many workers are entering information about their employer into prompts, including HR data and private financial data
Workers may be putting their employers at risk by using generative artificial intelligence (AI) tools, finds a new report from KPMG.
One in five (20%) workers have used generative AI to help them do their jobs. The top uses of this technology include:
- generating ideas (49%)
- research (48%)
- creating presentations (30%)
- summarizing or analyzing information from public resources (20%)
Just over half of users say generative AI tools save them up to five hours per week, and 67% say the time saved has allowed them to take on additional work they otherwise would not have had the capacity for.
Two-thirds (65%) say using generative AI is essential to address their workloads.
Between January and February, the use of OpenAI’s generative artificial intelligence (AI) technology ChatGPT soared by 120% globally, according to a report from DeskTime, a provider of workforce management solutions.
Risks of using generative AI
However, among generative AI users, nearly one-quarter (23%) say they are entering information about their employer (including its name) into prompts, finds the KPMG in Canada Generative AI Adoption Index, based on a survey of over 5,100 Canadians, conducted from May 17-29, 2023.
Some are even putting private financial data (10%) or other proprietary information such as human resources or supply chain data (15%) into their prompts.
When it comes to checking the accuracy of content generated by AI platforms, just under half (49%) of users say they check every time, while 46% check sometimes.
Generative AI platforms have been known to produce content that's misleading or inaccurate, often known as "hallucinations," KPMG notes.
"Data is an organization's most valuable asset, and it needs to be safeguarded to prevent data leaks, privacy violations, and cybersecurity breaches, not to mention financial and reputational damage," says Zoe Willis, national leader in data, digital and analytics and partner in KPMG in Canada's generative AI practice.
"Organizations might need to look at creating proprietary generative AI models with safeguarded access to their own data – that's a critical step to reducing the risk of sensitive data leaking out into the world and getting into the hands of potentially malicious actors."
While many employers are clamping down on ChatGPT use in the workplace, around 9,000 employees at Japan-based firm Daiwa Securities Group have been given the green light to use the AI chatbot, according to a previous report.
“It's absolutely critical for organizations to have clear processes and controls to prevent employees from entering sensitive information into generative AI prompts or relying on false or misleading material generated by AI," says Willis. "This starts with clearly defined policies that educate your employees on the use of these tools. Organizational guardrails are essential to ensure compliance with privacy laws, client agreements and professional standards.”
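As a minimal sketch of the kind of organizational guardrail Willis describes, a prompt could be screened for sensitive patterns before it is ever sent to an AI tool. Every pattern, company name, and identifier format below is a hypothetical illustration, not a KPMG recommendation; a real deployment would rely on a proper data loss prevention (DLP) tool with patterns tuned to the organization.

```python
import re

# Hypothetical examples of sensitive patterns an organization might block.
SENSITIVE_PATTERNS = {
    "employer name": re.compile(r"\bAcme Corp\b", re.IGNORECASE),  # placeholder company name
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),          # rough payment-card shape
    "employee ID": re.compile(r"\bEMP-\d{4,}\b"),                  # invented HR identifier format
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

# Example: a prompt mentioning the employer and an HR identifier is flagged.
flags = screen_prompt("Summarize Acme Corp's Q3 results for employee EMP-12345")
print(flags)  # → ['employer name', 'employee ID']
```

A check like this would sit alongside, not replace, the policies and employee education the report calls for, since regex screening catches only known patterns.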
In New Zealand, the Privacy Commissioner has released guidance on his office's expectations regarding the use of generative AI by businesses subject to the Privacy Act 2020 (Privacy Act). While the Commissioner acknowledges that the rapid pace at which generative AI is changing means the guidance will need ongoing review and amendment, it provides practical advice to New Zealand businesses on key data protection issues to consider when using generative AI.