NYC Cracks Down on AI in the Hiring Process

The New York City Council has enacted a new bill that will go into effect in 2023 and has monumental implications for the use of AI in the hiring process. The bill — signed into law on Dec. 11, 2021 — will require businesses to audit any AI hiring tools to verify they are free of bias based on traits like race and gender. What does this law mean, and how will it affect AI and recruiting moving forward?

The Largest Recruiting AI Law to Date

New York City is the largest city to date to implement a bill of this nature. The goal is to address growing concerns about AI data bias by increasing employers’ accountability and transparency. The bill requires an audit of all “automated employment decision” tools and calls for employers to be upfront with job applicants about their use in the hiring process. Any violations will result in significant fines.

Businesses hiring within New York City will need to set up reviews with independent auditors at the employer’s expense. The bill further stipulates that companies must make the results of each annual audit publicly available and accessible to any interested applicants. Additionally, they are required to notify applicants before an AI hiring tool is used to evaluate their application and to explain how it will be utilized.
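What might such an audit actually compute? One common approach is to compare selection rates across demographic groups, flagging any group whose rate falls well below the most-favored group’s. The sketch below is illustrative only — the function name, the sample data and the 0.8 threshold (the EEOC’s “four-fifths” rule of thumb, not a requirement spelled out in the article) are assumptions, not the law’s mandated methodology:

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """Selection rate per group, plus its ratio to the highest-rate group.

    `outcomes` is a list of (group, was_selected) pairs, e.g. one year
    of hiring decisions. An impact ratio well below 0.8 is a common
    red flag for adverse impact in audits of this kind.
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: (rate, rate / best) for g, rate in rates.items()}

# Hypothetical data: group A selected 40 of 100 times, group B 20 of 100.
decisions = [("A", True)] * 40 + [("A", False)] * 60 \
          + [("B", True)] * 20 + [("B", False)] * 80
print(impact_ratios(decisions))
```

Here group B’s selection rate is half of group A’s, giving an impact ratio of 0.5 — well under the four-fifths threshold, so an auditor would want to investigate further.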

Should AI Be Used in Hiring?

The new bill passed by the New York City Council comes in the wake of an increasingly complex debate about AI bias. On the one hand, businesses see AI hiring tools as a way to streamline the process. Some even thought early on that AI could be a solution to recruiting bias since it takes the human factor out of the equation, at least in theory. However, AI bias has become a critical issue in fields from recruiting to law enforcement.

A Problematic History

Amazon made headlines in 2018 when it halted the use of an AI hiring tool that showed a strong bias against female candidates. The tool penalized resumes that included the word “women’s” or named an all-female college or university. For example, candidates who had been “captain of the women’s chess club” were getting disqualified because the AI had learned to prefer male applicants. Amazon has historically been a significantly male-dominated company, especially in STEM-related positions.

The issue stems from a lack of transparency within the technology itself. Although AI cannot possess actual emotional bias, its decisions can be biased if the training data that informs them is, even subtly so. Unfortunately, it is difficult to detect these issues until the AI is already in use. A prime example of this is AI facial recognition software in law enforcement, which has faced severe backlash for persistent racial prejudice and inaccuracies.

Balancing Businesses and Employees

At its core, AI in hiring is not intended to be harmful. Initially, businesses were hopeful that it could improve equality in recruiting while making the hiring process easier. And there is more at stake here than convenience.

Countless businesses are struggling to keep up with hiring efforts, many of them in critical industries. For example, a staggering 84% of employers in the energy sector have reported difficulty finding qualified applicants, while a study in the healthcare field predicted a shortage of more than 400,000 home health aides by 2025. With ongoing labor shortages in similar industries, such as trucking and construction, many companies need some kind of efficient hiring solution. Theoretically, AI could help if the problem of data bias were adequately addressed.

This appears to be at the core of the New York City Council’s hiring AI bill. If an independent party can verify that an automated hiring tool is free of bias before it goes into use, maybe AI really could improve the process. If executed well, this bill could be a step toward bridging the needs of employers with the rights of job-seekers.

Programming Better

Although New York City’s new law represents real progress, there is still a long way to go in making AI equitable for everyone in employment. Experts have pointed out that the NYC Council’s bill only addresses AI discrimination within hiring. It does not address issues in other facets of work, such as determining pay, scheduling and promotions.

Even beyond legislation, AI developers themselves need to be held accountable for the algorithms they create. Actions like New York City’s bill will help reinforce this and increase public awareness of AI bias. Some developers are beginning to work on new types of algorithms known as explainable AI, which are created to be more transparent from the start by opening up the black box. This allows developers to see how the AI is making decisions, including any bias indicators.
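To make the idea of “opening up the black box” concrete: for simple models, explainability can mean showing exactly how much each input feature contributed to a decision. The sketch below is a minimal, invented illustration — the feature names and weights are hypothetical, not drawn from any real hiring system — of how a per-feature breakdown can surface a bias indicator like the one in the Amazon case:

```python
# In a linear scoring model, each feature's contribution to the score
# is simply weight * value, so a reviewer can see which inputs drove
# a decision. All names and weights here are invented for illustration.
WEIGHTS = {
    "years_experience": 0.50,
    "skills_match": 0.80,
    "womens_club_mention": -0.90,  # a learned penalty like this is a bias red flag
}

def explain(candidate):
    """Return each feature's contribution to the candidate's score."""
    return {f: WEIGHTS[f] * candidate.get(f, 0.0) for f in WEIGHTS}

def score(candidate):
    return sum(explain(candidate).values())

resume = {"years_experience": 4, "skills_match": 0.9, "womens_club_mention": 1}
print(explain(resume))  # the negative contribution stands out immediately
print(score(resume))
```

Real-world explainable-AI tooling handles far more complex models, but the goal is the same: expose each factor’s influence so a penalty tied to a protected trait cannot hide inside the model.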

The Future of AI in Employment

AI bias at large will need to be solved for the technology to fulfill its potential in employment. New York City’s new bill is not the first piece of legislation to address this issue, and it will certainly not be the last. The future of AI in employment, and countless other fields, depends on enforcing transparency and accountability to build tech that is truly trustworthy.

Author

  • April Miller

    April Miller is a senior AI writer at ReHack Magazine with more than three years of experience in the field of deep learning. April particularly enjoys breaking down complex AI topics for consumers and business professionals with actionable tips on how to use emerging technologies.
