Employers are increasingly using artificial intelligence (AI) to screen job candidates and make hiring decisions. More than 80% of all employers—and 99% of Fortune 500 companies—use some form of automated tool in their hiring process, according to the Equal Employment Opportunity Commission (EEOC).
This approach has its advantages: AI can streamline hiring, sifting through thousands of resumes or evaluating hundreds of video interviews in a fraction of the time it would take hiring personnel to do the same tasks.
And some argue that using AI in hiring can eliminate bias through blind evaluation: focusing on job skills while filtering out identity markers, such as name, age, and gender, that can lead to discrimination. Federal law prohibits employers from discriminating against job applicants or employees on the basis of race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), national origin, age, disability, or genetic information.
But AI is no less prone to unconscious bias than its human developers—and it might even be more likely than hiring personnel to introduce discrimination into the hiring process. As a result, job candidates from underrepresented groups might face additional barriers to hiring. AI can also expose both the companies that create hiring software and the employers who use it to liability for violating laws against discrimination.
Let's take a closer look at how AI is used in the hiring process, how it can introduce or perpetuate discrimination in hiring, and the current laws and other guidance governing AI discrimination in hiring.
Employers may rely on AI tools to assist them at every stage of the hiring process, from advertising job openings to making job offers. Examples of these AI tools include:

- algorithms that decide who sees online job ads
- resume scanners that rank applicants based on keywords
- chatbots that ask screening questions and reject candidates based on their answers
- video interview software that evaluates candidates based on their facial expressions and speech patterns
- testing software that scores applicants' personalities, aptitudes, or "job fit"
Because AI algorithms are trained using historical data, they can perpetuate any biases harbored in that data. For example, Amazon discontinued its automated candidate screening program in 2018 because the system—trained on 10 years of resumes submitted mostly by men—had learned to filter out female candidates.
In addition, AI algorithms may introduce bias by considering variables that seem harmless on their face but that the system learns to use as proxies for protected characteristics such as race or gender. For example, an AI system might screen applicants by zip code; because residential patterns often track race, that seemingly neutral filter can produce racial discrimination.
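To make the proxy problem concrete, here is a minimal sketch in Python, using toy data and made-up numbers rather than any real hiring system, of how a model that is never shown a protected characteristic can still learn to discriminate through a correlated variable like zip code:

```python
# Toy illustration of proxy discrimination: the model never sees "group"
# (the protected characteristic), yet learns to disfavor it via zip code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)          # protected characteristic (hidden from the model)
# Zip code correlates strongly with group (as residential segregation can cause).
zip_area = np.where(rng.random(n) < 0.8, group, 1 - group)
skill = rng.normal(0, 1, n)            # genuinely job-related signal

# Biased history: past hiring favored group 0 regardless of skill.
hired_before = (skill + 1.5 * (group == 0) + rng.normal(0, 1, n)) > 1.0

# Train only on the "neutral" features: skill and zip code.
X = np.column_stack([skill, zip_area])
model = LogisticRegression().fit(X, hired_before)
picked = model.predict(X)

for g in (0, 1):
    print(f"selection rate for group {g}: {picked[group == g].mean():.0%}")
# The rates diverge sharply: zip code has become a stand-in for group.
```

Dropping the protected column from the training data doesn't remove the bias; the model simply routes it through the correlated feature.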
Other examples of bias in AI hiring systems include the following:

- video interview software that misreads candidates whose speech or facial expressions are affected by a disability
- facial analysis tools that perform less accurately on women and people with darker skin
- personality and game-based assessments that screen out candidates with mental health conditions or other disabilities
- speech analysis tools that penalize non-native accents
The problem of AI discrimination is compounded by the fact that most vendors of AI hiring systems don't disclose how their algorithms work. It can be difficult to prevent or even identify AI discrimination when the inner workings of the systems are invisible to both job candidates and employers. Employers may rely on AI systems without questioning how they work or monitoring patterns in their hiring decisions.
Experts and government agencies have warned that AI discrimination could become entrenched unless developers and employers take measures such as testing, oversight, and auditing to ensure equity in the design and use of AI hiring systems.
There are currently no federal or state laws specifically addressing AI discrimination in recruitment and hiring, although several states are considering bills on the issue.
As of 2023, New York City is the only municipality that has passed a law governing AI bias in hiring. The law (Local Law 144) makes it illegal for employers to use an automated employment decision tool unless they first subject the tool to an independent "bias audit" and make the results of the audit public. The law also requires employers to notify job candidates and employees before using AI tools to evaluate them.
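The heart of such an audit is straightforward arithmetic: compute each group's selection rate and compare it to the most-selected group's rate (the "impact ratio"). Here is a minimal sketch of that calculation; the group labels and counts are hypothetical, invented for illustration:

```python
# Hypothetical screening outcomes for an automated hiring tool:
# group -> (candidates selected, total candidates)
results = {
    "Group A": (480, 800),
    "Group B": (270, 600),
    "Group C": (90, 250),
}

# Selection rate: fraction of each group's candidates the tool advanced.
rates = {g: selected / total for g, (selected, total) in results.items()}
top_rate = max(rates.values())

# Impact ratio: each group's rate relative to the most-selected group.
for g, rate in rates.items():
    print(f"{g}: selection rate {rate:.0%}, impact ratio {rate / top_rate:.2f}")
# Group A: 60%, 1.00  |  Group B: 45%, 0.75  |  Group C: 36%, 0.60
```

An impact ratio well below 1.00 doesn't by itself prove illegal discrimination, but it flags the tool for closer scrutiny.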
The federal government has also sounded the alarm about potential discrimination in AI recruitment and hiring tools. In 2022, the U.S. Justice Department and the EEOC jointly warned employers that AI tools could compound the discrimination already faced by job seekers with disabilities.
The White House has also issued the Blueprint for an AI Bill of Rights, which notes the harm that biased or discriminatory algorithms can cause and warns that AI has the potential to violate existing laws against employment discrimination.
The Blueprint advises AI designers, developers, and deployers to carefully test AI software before and after implementation to ensure that it doesn't discriminate based on race, gender, disability, or any other protected characteristic. The Blueprint also recommends ongoing audits and human oversight of AI systems.
Despite the absence of laws specifically addressing AI discrimination, the federal government has made clear that employers using AI tools are obligated to comply with existing laws governing employment discrimination.
Reliance on AI hiring software is not a defense to a discrimination lawsuit: Employers are responsible for ensuring that the hiring tools they use don't adversely impact a protected group of job applicants.
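One longstanding benchmark for adverse impact is the EEOC's "four-fifths rule": if a selection procedure picks one group at less than 80% of the rate of the most-selected group, the disparity is generally treated as evidence of adverse impact. For example, a screening tool that advances 60% of male applicants but only 40% of female applicants produces a ratio of 0.40 divided by 0.60, or about 67%, well below the four-fifths threshold.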
Both AI and the laws regulating it are evolving rapidly in all areas, including in job recruitment and hiring. An experienced employment lawyer can help you determine how new and existing laws apply to your unique situation.
Contact an employment lawyer if you believe you've experienced AI discrimination in the hiring process, or if you're an employer who wants to reduce your risk of violating anti-discrimination laws in your use of AI hiring tools.