How Does the Law Regulate AI?

An overview of how laws in different areas of life—including privacy, housing, employment, and law enforcement—do or don’t deal with the issue of artificial intelligence.

By , Attorney · University of North Carolina School of Law

Artificial intelligence is permeating everyday life. Businesses are using AI tools to collect and deliver personal information. Employers are using AI to screen job applicants. Students are using it to support (if not actually do) their schoolwork. Companies are using and even trying to copyright AI-generated content.

All these uses of artificial intelligence raise the question: What laws regulate AI? Are there any legal limits on this technology?

AI Uses—and Abuses?

"Artificial intelligence" is a broad term used for computer programs (or machines) that are trained to perform tasks that normally require human intelligence. AI models range in complexity. Some can perform relatively simple tasks like data entry. Others can carry on real-time conversations with humans.

The most popular complex AI models are machine learning models, which, through their training, learn and improve over time. Machine learning models include large language models (LLMs) like ChatGPT, Bard, and LLaMA.

Businesses across industries are using these and other AI tools for a variety of purposes. The ways that companies and people are using AI raise questions and concerns about a number of issues, including data privacy, hiring bias, housing discrimination, copyright, and law enforcement surveillance.

In July 2023, the Federal Trade Commission (FTC), the independent federal agency charged with consumer protection, launched an investigation into OpenAI (owner of ChatGPT). FTC Chair Lina Khan said, "Although these [AI] tools are novel, they are not exempt from existing rules."

So, what are the existing rules? And are new ones coming?

Laws Regulating AI on Data Privacy

One of the primary concerns around AI tools has been data privacy. The public and lawmakers have questions about how LLMs like ChatGPT, Bard, and LLaMA access, process, and distribute personal information. (Personal information is any piece of data that can identify someone, such as a person's name, address, Social Security number, or financial information.)

Generally, the U.S. protects consumer data privacy through the FTC. The FTC is mainly charged with protecting consumers against unfair or deceptive business practices. The FTC's investigation of OpenAI and ChatGPT could set the framework for limits on how companies gather, use, and share consumer data.

In a move to provide more protections for U.S. consumers, the House Energy and Commerce Committee introduced a comprehensive data privacy bill called the American Data Privacy and Protection Act. But as of August 2023, the House and Senate haven't voted on the bill.

While federal consumer protection laws can be vague, some states have created their own laws to protect residents. States like California and Virginia have passed stricter data protection laws that are similar to the European Union's General Data Protection Regulation (GDPR). These laws give consumers a say in how businesses collect, use, and share their information.

Read more about how data privacy laws are adapting to AI.

Laws on AI-Created Employer Bias

A growing number of employers are using AI to screen job applicants. Employers can use AI tools to:

  • scan resumes
  • predict job performance
  • target specific applicants, and
  • assess video interviews.

Employers use these tools to streamline the hiring process and identify the candidates who best fit the position.

Employers often turn to AI to eliminate human bias in the recruitment or hiring process. Federal law prohibits employers from discriminating against job applicants (and employees) based on characteristics that include race, color, sex, religion, age, disability, national origin, and genetic information (like your medical history). State laws protect employees from more kinds of discrimination—for instance, bias having to do with marital status or weight. Using AI to hire employees is supposed to lead to objective, discrimination-free hiring decisions that comply with the law.

But even if an employer uses AI to recruit, interview, and present a job offer to an applicant, humans aren't eliminated from the equation. Humans build the AI models, and these models are often trained on biased information, resulting in biased AI. Even AI built with good intentions can discriminate against job applicants. For example, AI tools have been found to favor resumes written by men over those written by women and to disfavor resumes submitted by people with distinctively Black names.

As of 2023, U.S. cities and states haven't passed many laws to combat AI bias in hiring. New York City is the only city that's passed a law regulating the use of AI in hiring. And, while a handful of states are considering regulations, only Maryland and Illinois have enacted laws targeted at employers using AI. The federal government hasn't passed any law addressing this issue.

To learn more, read about how AI can discriminate in recruitment and hiring.

Laws on AI-Created Housing Discrimination

Landlords are increasingly using tenant screening services to clear or disqualify housing applicants. Typically, these services issue reports—commonly called "background checks"—based on the applicant's records, including criminal history.

The screening service is usually an AI tool that mines information from databases to produce a comprehensive report. The report often includes a score or a "yes" or "no" recommendation for the applicant.

But these reports can provide inaccurate results. For example, a service might provide information for Vincent C. Vega when the applicant is actually Vincent T. Vega.

Moreover, the algorithms used by these screening services can unfairly disadvantage protected groups of people. These AI screening tools often produce background check results that give little context about an applicant's criminal cases. The results are also shaped by discriminatory police practices that lead to Black people having more interactions with police than white people.

Under the federal Fair Housing Act (FHA), landlords—as well as sellers, lenders, and others—can't discriminate against someone trying to obtain housing based on that person's protected characteristics, such as race, national origin, and disability. (42 U.S.C. § 3604 (2023).) Because the algorithms used by these screening services generate reports that disadvantage protected groups, the use of these algorithms might violate the FHA and other fair housing laws.

Fifteen attorneys general shared this concern about biased AI algorithms in May 2023. The attorneys general wrote a letter to the Consumer Financial Protection Bureau (CFPB) and the FTC, the agencies charged with regulating these screening methods, urging them to act. The letter called for banning these kinds of algorithms because they're based on inaccurate, biased, and unverifiable data and deny housing to protected groups.

As of August 2023, the CFPB and FTC have yet to act.

Laws on AI and Copyright

A legal area that has seen a lot of lawsuits over AI is intellectual property, specifically copyright law. The two main questions here have to do with claiming copyright and using copyrighted material.

Can you copyright work created by AI? The U.S. Copyright Office has said that works created by AI can't be copyrighted because copyright requires "human authorship." So, if a creator were to use an AI tool to produce a work they wish to copyright, their use of the AI tool would need to be limited. In other words, the AI tool would need to be simply a helper. The Copyright Office says that how much an AI tool can help a creator without eliminating copyright protection will be decided on a case-by-case basis.

Can you use copyrighted works to train AI? As of 2023, some copyright owners are suing AI developers on this issue. When training AI models—specifically models like ChatGPT and LLaMA—developers use large amounts of text from websites and books. Copyright holders argue that AI developers are using their copyrighted material to train the AI without permission and without compensation. Unless the federal government intervenes, the courts will decide whether AI developers can keep using copyrighted works as training data without the owners' consent.

For more on these intellectual property issues, read about how copyright law treats AI.

Law Enforcement Use of AI

"Police surveillance" is a scary phrase these days. The police use AI tools in ways that make many people uncomfortable. For example, law enforcement uses AI facial recognition software to compare images of suspects to image databases in order to match the faces to names. The image databases are filled with images of people who don't know they're part of the databases.

In theory, AI tools should produce objective, data-driven results. But there are two main problems with current AI use in law enforcement:

  • the AI is trained on biased data, and
  • there's limited supervision, training, and accountability.

Police practices and data (like arrest and conviction statistics) have been shown to be biased. Yet this biased data is being used to train AI to predict where crime is more likely to happen and who's most likely to commit that crime.

For example, suppose a mostly African American neighborhood was more heavily policed in the past 10 years due to racial bias, and that that policing resulted in a disproportionate number of arrests. If that higher arrest rate is fed into an AI tool used to predict places where crimes are more likely to happen, then that neighborhood would probably be unfairly identified as a higher-crime area. If a police force follows the AI's biased prediction, officers will patrol that neighborhood more. In this sort of situation, AI perpetuates existing bias.
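To make that feedback loop concrete, here's a minimal, purely illustrative sketch in Python. The numbers and the "send patrols in proportion to past arrests" rule are hypothetical assumptions for illustration only, not a description of any real predictive policing system.

```python
# Toy illustration of a predictive policing feedback loop.
# All numbers and the allocation rule are hypothetical.

# Two neighborhoods with the same underlying crime rate, but Neighborhood A
# starts with more recorded arrests because it was historically over-policed.
arrests = {"Neighborhood A": 80, "Neighborhood B": 20}
true_crime_rate = {"Neighborhood A": 0.5, "Neighborhood B": 0.5}

TOTAL_PATROLS = 100  # patrols available each period

for period in range(1, 6):
    total_arrests = sum(arrests.values())
    # "Prediction": allocate patrols in proportion to past arrest counts.
    patrols = {
        hood: round(TOTAL_PATROLS * count / total_arrests)
        for hood, count in arrests.items()
    }
    # New arrests are only recorded where patrols are sent, so the skewed
    # allocation keeps generating data that "confirms" the original skew.
    for hood, n_patrols in patrols.items():
        arrests[hood] += int(n_patrols * true_crime_rate[hood])
    print(period, patrols)
```

Even though both neighborhoods have the same underlying crime rate in this toy example, the patrol split never moves off the original 80/20 arrest skew, because the model only ever learns from data its own patrol decisions produced. That's the sense in which AI can perpetuate existing bias.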

You're protected from some surveillance by the Fourth Amendment. Depending on the level of surveillance, the police could need a warrant or probable cause. In many instances, though, they need no justification at all to keep tabs on people. That was one thing before the days of computers; it's entirely another in the era of machine learning.

As of 2023, there are few regulations on law enforcement's use of AI. Instead, police use AI in their everyday practices with little oversight and few rules.

The Future of AI Law

Outside the U.S., the European Union (EU) has proposed an AI Act to regulate artificial intelligence in various areas of law, including data privacy, employment, education, and law enforcement.

The proposed Act would categorize AI systems according to their level of risk—ranging from "unacceptable" to "low or minimal." The AI system's level of risk would lead to corresponding oversight. EU legislators are still negotiating the AI Act. It hasn't yet been finalized or adopted as of August 2023.

In the U.S., the regulatory response across the board lags far behind quickly evolving AI technologies.

In June 2023, Senator Chuck Schumer announced his SAFE Innovation framework, which stands for "security, accountability, foundations, and explainability." Senator Schumer's framework is a call to action for Congress to pass laws addressing many aspects of AI, including job security, privacy, intellectual property, and civil liberties.

State laws could curb some misuse of AI. But, in the short term, the outcomes of court cases could provide clearer standards about how AI can and can't be deployed in various industries. Federal agencies could enforce existing laws so that they extend to AI, but regulations specifically designed for AI are few and far between.
