'It's very difficult to detect these things accurately, and you can get actually in quite a bit of trouble'
Leading AI developer Anthropic has banned the use of AI in job applications – should other employers do the same? HRD Canada spoke with experts for answers.
“If people were going to lie on their resumes, people were always going to lie on their resumes. It's just a different route to doing that,” says Matthew Guzdial, computing professor and CIFAR AI chair at the Alberta Machine Intelligence Institute (Amii) at the University of Alberta.
Anthropic’s “AI Policy for Application” appears to be an across-the-board addition to most if not all job postings, from Research Engineer / Research Scientist to Facilities Coordinator to Product Designer.
The policy also appears to be posted in all geographic markets.
“While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process,” the policy states.
“We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.”
The policy is followed by a drop-down menu where applicants can select “yes” or “no”.
However, Greg Buechler, California-based tech talent acquisition executive, says it would be very difficult for even Anthropic to identify who had used the tech and who hadn’t.
“Your true style is somewhat diminished when a tool does both the message generation and polishing,” he says. “With that said, it would be very difficult for them to identify candidates who know how to use Gen AI well.”
As the developer of the popular chatbot Claude, Anthropic banning AI use in job applications might seem like an obvious reason for employers to follow suit – but it’s not that simple, says Guzdial.
“While I think it's definitely an understandable thing to ban the use of large language models or AI for resumes or cover letters, it's not actually enforceable right now,” Guzdial says.
“There's a bunch of services that claim that they can detect generated AI text, but all of them have about a 12% accuracy. So, they're worse than chance, in terms of figuring out, did this person actually use a large language model, a service like ChatGPT or whatever, to be able to create this text?”
Because generative AI (genAI) detection tools are built with humangenerated text, he explains, they are prone to mistakes, such as reading text written by someone for whom English is a second language as AI generated.
“Because of that, it's really hard to do this detection problem, to be able to say confidently that ‘This text was generated by a large language model’ versus ‘This text was generated by a non-native English speaker,’ for example,” Guzdial says.
“Or ‘This text was generated by somebody who just made some mistakes in their wording,’ or things like that. So, it's very difficult to detect these things accurately, and so you can get actually in quite a bit of trouble.”
Instead of eliminating AI use, the focus instead should be reducing the amount of time wasted on applications that will never make it to the interview stage, says Guzdial.
“You want to limit the amount of flak, the amount of people that you're just never going to hire,” he says. For example, many applicants use AI to scan company websites and job postings to generate resumes and cover letters.
By inserting “invisible” text (that is the same colour as the background) on websites and job postings, and instructing the applicant’s AI tool to create something else – “’Mary Had a Little Lamb’ or whatever, just something arbitrary” – employers can dissuade applicants from continuing their application.
“If somebody's actually paying any attention to what's going on, they'll catch it, right? And so, you're not going to be able to catch the people who are working interactively with a large language model, using it as an editor, or something like that,” Guzdial explains.
“But there's a bunch of people who just automate this process … so it's a big waste of time, if you have to dig through a bunch of potentially just generated resumes that aren’t meaningful.”
The next level of screening applicants for AI use is verifying the facts on their documents, Guzdial says, since many applicants won’t do this step, either assuming they won’t be fact-checked or being over-reliant on technology to do the work.
“If you do a quick search on them, does it actually turn out they have the degrees, that they went to the schools that they said?” he says.
“Obviously these are things that are possible to fake as well, but now you're getting one layer of the people who are too lazy to do these things … because things like ChatGPT will love to say that ‘Oh, you have a degree from Harvard’ or whatever. They'll hallucinate all the time.”
Once at the interview phase, there are specific questions that can help recruiters hone in on potential employees’ AI use, Buechler says.
Questions such as “How do you use AI to assist in your day-to-day work?” “How does AI optimize your value?” and “How does AI optimize your time?” can help to accurately assess applicant AI knowledge, beyond what they might have used in the creation of their application.
As Guzdial explains, using technology to supplement job applications is not new – for HR professionals, it’s just a matter of refining the process of weeding those applicants out.
“AI is not making people lie on their resumes,” he says. “You can say you're going to ban AI, but ultimately, that's a bit like saying’ I'm going to ban people from lying on their resumes.’ It's just not possible to enforce until, they get to the interview stage.”