'Employers may want to consider reviewing the extent to which they are using such tools'
New York City’s new law regulating employers’ use of artificial intelligence (AI) in making hiring decisions has taken effect.
With the city’s law regulating the use of automated employment decision tools (AEDTs) taking effect on July 5, employers that use AI tools to make hiring decisions must now disclose this fact to candidates.
The law also requires employers to conduct annual third-party "bias audits" of the technology or software they use, to make public the ways in which the AI could be discriminating against certain types of candidates, according to a CBS News report.
"If in fact the employers are using an automated employment decision tool (AEDT), then the employer has to commission an independent audit, publish a summary, tell applicants and employees they're using it, and give applicants the opportunity to have an accommodation and pursue an alternative selection process," Domenique Camacho Moran, an employment attorney at Farrell Fritz, told CBS MoneyWatch, according to the report.
"We are only talking about those tools that take the place of human people making decisions."
Employers who violate the law concerning the use of AI in hiring decisions will be fined $500 for the first violation, and $1,500 for subsequent violations.
A recent IBM survey of 3,000 CEOs from over 30 countries revealed that 46% have hired more employees due to generative AI, while 26% plan to do so in the next 12 months.
Law on AI focuses on risks
John Hausknecht, a professor of human resources at Cornell University's School of Industrial and Labor Relations, said the new law focuses on the parts of AI that humans can’t even explain.
"That's the risk in all of this, that left unchecked, humans sometimes can't even explain what data points the algorithm is picking up on. That's what was largely behind this legislation,” he told CBS News. "It's saying, ‘Let's track it, collect data, analyze it and report it, so over time, we can make changes to the regulations.’"
Under the new law, AI screening disclosure "must include instructions for how an individual can request an alternative selection process or a reasonable accommodation under other laws, if available.” Employers are not required to actually use a different screening process.
A previous survey from ResumeBuilder revealed that 91% of hiring employers are looking for workers with experience on ChatGPT, an AI chatbot that rose to fame after its release in November 2022.
‘Adds one more cost to process of hiring and promoting’
The law, however, could add to the stress of small employers, Littler Mendelson employment attorney Niloy Ray told CBS MoneyWatch.
"This requirement just adds one more cost to the process of hiring and promoting within New York City and it is a cumbersome one. So it creates a certain amount of risk of somehow not complying because it wasn't crystal clear what you needed to do to comply and certainly there's the cost of compliance."
Meanwhile, Emily Dickens, chief of staff and head of public affairs at the Society for Human Resource Management, objects to the fines.
"It was a good faith attempt to try to assign some regulatory guardrails around the issue that could impact some people adversely if it's not used correctly," she said. "But we should assume good intent until we see something very egregious. It's the first law of its kind and is likely to be replicated in other jurisdictions and you don't want to start with penalizing people for trying to do the right thing."
Simone Francis, shareholder, and Zachary Zagger, senior marketing counsel, both at law firm Ogletree Deakins, said that with the new NYC law coming into effect, employers may want to review their use of AI in hiring.
“Employers are increasingly relying on automated decision-making tools and AI systems to make employment-related decisions, including hiring, screening job candidates, or improving workplace efficiency. New York City is one of several jurisdictions to place guardrails on the use of this emerging technology as federal regulators, such as the EEOC, have further raised concerns that the tools could result in discrimination against individuals with disabilities or other protected groups,” they said in a piece published on JD Supra.
“Employers and employment agencies in New York may want to consider reviewing the extent to which they are already using such tools and whether such tools used or planned to be used have been subjected to bias audits.”
AI pioneer Geoffrey Hinton, the former Google executive dubbed “the godfather of AI”, continues to raise concerns about the risks of AI, saying at a recent event that advances in AI technology are pushing the world into “a period of huge uncertainty”.