What Is AI Discrimination in Recruitment and Hiring?

More and more employers are relying on artificial intelligence (AI) to find and hire workers. But does AI reduce bias in the hiring process—or amplify it?

Employers are increasingly using artificial intelligence (AI) to screen job candidates and make hiring decisions. More than 80% of all employers—and 99% of Fortune 500 companies—use some form of automated tool in their hiring process, according to the Equal Employment Opportunity Commission (EEOC).

This approach has its advantages: AI can streamline hiring, sifting through thousands of resumes or evaluating hundreds of video interviews in a fraction of the time it would take hiring personnel to do the same work.

Some argue that AI can even eliminate bias from hiring through blind evaluation: systems that focus on job skills while filtering out identity markers, such as name, age, and gender, that can lead to discrimination. Under federal law, it is illegal for employers to discriminate against job applicants or employees on the basis of race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, disability, or genetic information.

But AI is no less prone to unconscious bias than its human developers—and it might even be more likely than hiring personnel to introduce discrimination into the hiring process. As a result, job candidates from underrepresented groups might face additional barriers to hiring. AI can also expose both the companies that create hiring software and the employers who use it to liability for violating laws against discrimination.

Let's take a closer look at how AI is used in the hiring process, how it can introduce or perpetuate discrimination in hiring, and the current laws and other guidance governing AI discrimination in hiring.

How AI Is Used in the Hiring Process

Employers may rely on AI tools to assist them at every stage of the hiring process, from advertising job openings to making job offers. Examples of these AI tools include:

  • predictive technologies that advertise job openings to candidates most likely to be a good fit, or that identify potential candidates for recruiters to target
  • resume and cover letter scanners that look for desirable keywords
  • "chatbots" or conversational virtual assistants that screen out applicants who don't meet certain requirements
  • software that evaluates candidates' facial expressions and speech patterns during video interviews
  • testing software that scores applicants based on characteristics such as personality, aptitude, and culture fit
  • programs that help employers make a job offer that an applicant is likely to accept

How AI Can Introduce or Perpetuate Discrimination in Hiring

Because AI algorithms are trained on historical data, they can perpetuate any biases embedded in that data. For example, Amazon discontinued its automated candidate screening program in 2018 because the system, trained on 10 years of resumes submitted mostly by men, had taught itself to filter out female candidates.

In addition, AI algorithms may introduce bias by considering variables that seem harmless on their face but that the system learns to use as proxies for protected characteristics such as race or gender. For example, an AI system that screens applicants by zip code can end up discriminating based on race, because residential zip codes often correlate strongly with race.
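
To make the proxy problem concrete, here is a minimal sketch, in Python with scikit-learn, of how this can happen. Everything in it is synthetic and hypothetical: it simulates biased historical hiring data in which zip code correlates with a protected attribute, then trains a screening model that never sees the protected attribute at all.

    # Hypothetical sketch: a screening model trained on biased historical
    # data learns to use zip code as a proxy for a protected attribute,
    # even though that attribute is never among its inputs.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Synthetic protected attribute, and a zip code that correlates with
    # it -- mirroring real-world residential segregation.
    group = rng.integers(0, 2, n)
    zip_code = np.where(rng.random(n) < 0.8, group, 1 - group)

    # A job-relevant skill score, identically distributed across groups.
    skill = rng.normal(0, 1, n)

    # Biased history: past hiring favored group 1 regardless of skill.
    hired = (skill + 1.5 * group + rng.normal(0, 1, n)) > 1.0

    # Train on zip code and skill only; 'group' is deliberately excluded.
    X = np.column_stack([zip_code, skill])
    model = LogisticRegression().fit(X, hired)

    # The model still selects group 1 far more often, via the proxy.
    predicted = model.predict(X)
    for g in (0, 1):
        print(f"predicted selection rate, group {g}: "
              f"{predicted[group == g].mean():.2%}")

The point of the sketch is that simply removing a protected attribute from a model's inputs doesn't remove the bias: the model routes the same signal through whatever correlated features remain.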

Other examples of bias in AI hiring systems include the following:

  • Even when employers set highly inclusive advertising parameters for job openings, the Facebook algorithms that determine who will find certain job postings "relevant" showed supermarket cashier positions to an audience that was 85% women, and showed taxi company jobs to an audience that was approximately 75% Black.
  • Two facial recognition programs were found to interpret Black job applicants as having more negative emotions than white applicants.
  • Chatbots and programs that evaluate video interviews may be trained primarily on native English speakers, so they may score job applicants lower, or screen them out entirely, if those applicants speak English as a second language, don't use standard English grammar, or have a speech impediment.
  • AI testing systems that require the use of a keyboard, trackpad, or other manual input device may automatically reject job applicants who have limited manual dexterity due to a disability.

The problem of AI discrimination is compounded by the fact that most vendors of AI hiring systems don't disclose how their algorithms work. It can be difficult to prevent or even identify AI discrimination when the inner workings of the systems are invisible to both job candidates and employers. Employers may rely on AI systems without questioning how they work or monitoring patterns in their hiring decisions.

Experts and government agencies have warned that AI discrimination could become entrenched unless developers and employers take measures such as testing, oversight, and auditing to ensure equity in the design and use of AI hiring systems.
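
One widely used screening heuristic for such testing is the EEOC's "four-fifths rule": if the selection rate for one group is less than 80% of the rate for the most-selected group, the tool may be having an adverse impact and deserves closer scrutiny. The sketch below, in Python with made-up numbers, shows what that arithmetic looks like; New York City's bias-audit law (discussed below) requires publishing similar "impact ratio" calculations.

    # Hypothetical adverse-impact check using the EEOC "four-fifths rule".
    # The applicant counts are invented purely for illustration.

    def impact_ratios(outcomes):
        """outcomes maps group -> (number selected, number of applicants)."""
        rates = {g: sel / total for g, (sel, total) in outcomes.items()}
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    audit_data = {
        "group_a": (48, 100),   # 48% selection rate
        "group_b": (30, 100),   # 30% selection rate
    }

    for group, ratio in impact_ratios(audit_data).items():
        flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} -> {flag}")

Here group_b's impact ratio is 0.30 / 0.48, or about 0.63, well under the 0.8 threshold, so the result would be flagged for human review. The four-fifths rule is a rule of thumb rather than a legal bright line, but it illustrates the kind of routine monitoring regulators have in mind.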

Laws and Other Guidance on AI Discrimination in Hiring

No federal law specifically addresses AI discrimination in recruitment and hiring, and only a few state laws touch on the issue (Illinois, for example, regulates employers' use of AI to analyze video interviews), although several states are considering broader legislation.

As of 2023, New York City is the only municipality that has passed a law governing AI bias in hiring. The law makes it illegal for employers to use an automated employment decision tool unless they first conduct an independent "bias audit" of the tool and make the results public. The law also requires employers to notify job candidates and employees before using AI tools to evaluate them.

The federal government has also sounded an alarm regarding potential discrimination in the use of AI tools for recruitment and hiring. In 2022, the U.S. Justice Department and the Equal Employment Opportunity Commission (EEOC) jointly warned employers about using AI tools that could compound the discrimination already faced by job seekers with disabilities.

The White House has also issued a Blueprint for an AI Bill of Rights, which notes the potential harm that can be caused by biased or discriminatory AI algorithms and warns of AI's potential to violate existing laws against employment discrimination.

The Blueprint advises AI designers, developers, and deployers to carefully test AI software before and after implementation to ensure that it doesn't discriminate based on race, gender, disability, or any other protected characteristic. The Blueprint also recommends ongoing audits and human oversight of AI systems.

Despite the absence of laws specifically addressing AI discrimination, the federal government has made clear that employers using AI tools are obligated to comply with existing laws governing employment discrimination.

Reliance on AI hiring software is not a defense to a discrimination lawsuit: Employers are responsible for ensuring that the hiring tools they use aren't adversely impacting a protected group of job applicants.

Contact an Employment Lawyer

Both AI and the laws regulating it are evolving rapidly in all areas, including in job recruitment and hiring. An experienced employment lawyer can help you determine how new and existing laws apply to your unique situation.

Contact an employment lawyer if you believe you've experienced AI discrimination in the hiring process, or if you're an employer who wants to reduce your risk of violating anti-discrimination laws in your use of AI hiring tools.
