Artificial Intelligence, Defamation, and Libel: Is Anyone Liable?

AI software probably can’t defame you (yet), legally speaking, but you might have options if you were harmed by AI-produced content.

By , Attorney · University of Missouri–Kansas City School of Law

Brian Hood, the mayor of a small Australian town, claims he was defamed by ChatGPT, the artificial intelligence (AI) software developed by OpenAI. Hood has threatened to sue OpenAI for defamation, a lawsuit that would be the first of its kind. Hood's case, and others like it that are sure to follow, raises a variety of difficult legal issues.

For example, can AI software (sometimes called a "chatbot") defame you? Based on current defamation law, probably not. But if false and harmful AI-generated information about you has been published, you might be able to hold someone responsible.

Current Defamation Law

Defamation includes both spoken defamatory statements (slander) and written defamatory statements (libel). To win a defamation case, the plaintiff (the party who claims to have been defamed) typically must prove these elements:

  • the defendant (the party being sued) published a statement of fact about the plaintiff
  • the statement was false
  • the false statement caused the plaintiff some harm, and
  • the statement wasn't privileged or protected by law.

A defamation plaintiff who isn't a "public figure" (discussed below) must also show that the defendant negligently made the false statement. Negligence is just a legal term for carelessness.

If the plaintiff is a well-known public figure like a celebrity, a famous athlete, or a politician, negligence isn't enough. A public figure must prove that the defendant made the false statement of fact with "actual malice." "Actual malice" means the defendant knew the statement was false, or made the statement with reckless disregard for its truth or falsity.

Why an AI Chatbot Can't Defame You—Yet

For starters, note that the law hasn't decided whether AI-generated content should even qualify as speech protected by the First Amendment. Does AI software really "speak" like a human? Or does it just cobble together human-produced words and phrases into something that looks or sounds like speech? It will take some time for courts and lawmakers to answer these questions.

To keep our discussion simple, let's assume that AI software can "speak" in the eyes of the law. If AI software can speak, it can publish a false statement of fact that isn't privileged and that causes a person some harm. So isn't that defamation?

Not so fast.

Remember that a person who claims to have been defamed needs to prove the chatbot spoke with actual malice (in the case of a public figure plaintiff) or negligently (in the case of a private figure plaintiff). Proving either one of these elements will almost certainly be a problem.

AI and Actual Malice

Even the most sophisticated present-day AI software lacks the capacity to "know" or to act "recklessly" in the same sense that a human can. In other words, today's AI can't publish a statement either with knowledge that it was false or with reckless disregard for its truth or falsity, and that's the legal bar that a public figure must clear when proving their defamation case.

AI and Negligence

Negligence is easier to prove than actual malice, but establishing it still poses problems when AI is doing the talking. Can AI software be negligent? Can it fail to act with reasonable care? Do we compare the AI software's actions to those of a reasonably careful person, or to a reasonably careful AI tool? As you can see, adapting even a well-worn legal concept like negligence to AI poses sticky problems.

If a Chatbot Can't Defame Me, Does That Mean It's Off the Hook?

For now, probably so. But be patient. The law takes time to adapt to new and emerging technologies. The day might come, perhaps sooner rather than later, when we hold AI tools like chatbots directly accountable for the harms they cause.

Who's Legally Responsible for Chatbot Content?

Let's divide the universe of those who might be liable for AI-generated content into three groups:

  • those who create AI software (like OpenAI)
  • those who host AI-created content (like search engines), and
  • those who use AI software to create content (like journalists and news networks).

Here's a quick review of the defamation elements for each potentially liable party.

Those Who Create AI Software

Defamation liability for AI software makers seems like a stretch. An AI tool like a chatbot is just a sophisticated set of machine rules and instructions. Using those rules and instructions, AI software reviews enormous data sets to "learn" language, then compiles responses to specific queries in formats that mimic human speech or writing.

It's possible, of course, that an AI software maker could deliberately or negligently instruct its software to produce false and defamatory content. Were that to happen, it would make sense to hold the maker legally responsible for any resulting harm.

But in most cases, the software creator won't tell the software what to say, or what specific sources to use in compiling any particular response. The software maker doesn't publish any false factual statements, nor does it typically have the state of mind needed to defame anyone.

Search Engines

Search engines are, in the simplest terms, content aggregators that also rank results based on relevance and other factors. They use rules and instructions written by programmers to carry out these functions based on user inputs. In that respect, they're much like AI chatbots themselves.

In countries where it's easier to prove defamation, search engines have found themselves in legal hot water over search results produced by "autocomplete"—where you begin typing a query and the search engine suggests additional words and phrases. Here in the United States, we've chosen a different approach.

Specifically, in most cases, search engine liability is barred by Section 230 of the Communications Decency Act, a federal law. Under Section 230, "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." As long as search engines and similar platforms don't cross the line from content host to "information content provider," they're generally safe.

So, under existing federal law, search engines have a strong defense to defamation liability. That defense could change if courts or Congress decide that Section 230 shouldn't protect machine-generated content, but such a change might create more problems than it solves.

Those Who Use AI Software to Produce Content

People who use AI software to create content and then publish the output, an increasingly common practice in newsrooms, are more likely targets for defamation liability. Typically, someone uses AI software by querying it. For example, a query might ask "Has [your name] ever been convicted of a crime?" or "Tell me about Jennifer Garner's movie career."

The software responds based on what it's "learned" about the subject, using rules and instructions provided by the software maker. The AI software, in other words, creates content based on user inputs.
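For readers curious what "querying" looks like in practice, here's a minimal sketch of a programmatic query using OpenAI's Python SDK. The model name and prompt are illustrative assumptions, not a description of any particular publisher's workflow; the point is simply that the user supplies the question and the software generates the content.

    # A minimal sketch of querying a chatbot from Python using the
    # OpenAI SDK (pip install openai). Model name and prompt are
    # illustrative assumptions only.
    from openai import OpenAI

    client = OpenAI()  # reads your API key from the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name, for illustration
        messages=[
            {"role": "user", "content": "Tell me about Jennifer Garner's movie career."},
        ],
    )

    # The software composes this text; the user decides whether to
    # review it and publish it.
    print(response.choices[0].message.content)

Nothing in the response is vetted for accuracy. Whatever the model returns, the decision to publish it, and any liability that follows, rests with the user.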

More importantly, people who use AI software to create content can review (or can fairly be expected to review) the output before publishing it. Unlike the software itself, these users can form the state of mind that defamation requires. In other words, they're capable of negligently or maliciously publishing false factual content. It shouldn't matter who or what creates the content, so long as it's published by a person or entity that can be held liable for defamation.

Should You Use AI Software to Create and Publish Content?

If you use AI software like ChatGPT to produce content that you then publish, say on a social media platform, should you be concerned about defamation liability? Possibly so.

Because the law is unsettled, you could be held responsible for anything the software writes that you publish. The law likely won't distinguish between content you write and content your AI software writes, particularly if you publish the work under your own name or identity.

Note, too, that if you publish AI-generated content on a social media platform, odds are you won't be able to take advantage of Section 230 immunity. Under that law, you'd likely be considered an "information content provider," which means you're not shielded.

What can you do to protect yourself? For starters, you should carefully review any content created by your AI software for truthfulness and accuracy. Truth, as the saying goes, is a defense to defamation liability.

If your AI software has created content that might be defamatory, or if you're unsure, don't publish it until you've been advised by an attorney with expertise in defamation law.

What to Do If You Think You've Been Defamed by a Chatbot

The law changes in response to human needs. As technology creates new ways for people to interact, work, and communicate, laws must evolve to keep pace. But legal change doesn't happen overnight. Courts and legislatures need to be presented with concrete facts—actual cases involving real harm to real people and businesses—so they can understand the nature of the problem and devise workable solutions.

If you've been harmed by AI-generated content, talking to an experienced defamation lawyer is a good first step. It may be that you have a case that doesn't fit neatly within the confines of existing defamation law, or that another type of legal claim might be a better fit.

Lawyers are allowed to pursue claims that make a good faith attempt to extend or modify the law. This sort of legal boundary testing is precisely what must happen in order to bring defamation laws in line with rapidly changing technology like AI and chatbot-generated content.
