In response to a query from a journalist, ChatGPT, a popular artificial intelligence (AI) model, said that Mark Walters, a Georgia-based radio host, had been sued by the Second Amendment Foundation for fraud and embezzlement. While he was the Foundation's treasurer, said ChatGPT, Walters had misappropriated funds, manipulated records to cover up his wrongdoing, and neglected to provide timely and accurate reports to Foundation leadership.
There was one big problem with ChatGPT's report: It was wrong, from start to finish. Every bit of it had been "hallucinated" by ChatGPT—made up, for reasons unknown, out of whole cloth. Walters had never been employed by, served as an officer of, or been sued by the Second Amendment Foundation. He'd never gotten a paycheck from the organization, much less diverted or stolen funds. And because he'd never wrongfully taken money from the group, there weren't any records to manipulate or reports to neglect.
Walters sued OpenAI, ChatGPT's creator, for defamation. As it turns out, this lawsuit was the first of its kind in the United States, alleging that the plaintiff (the party who sued) had been defamed by AI. The case raised novel legal issues—issues that undoubtedly will appear in lots of defamation claims to come.
After a brief overview of existing defamation law, we'll explore some of those issues, including whether AI models can "speak" for First Amendment purposes, how Section 230 treats AI-generated content, and who can be held liable when AI output defames someone.
Defamation comes in two basic forms: libel and slander. Spoken defamatory statements are called slander; written defamatory statements are called libel. There are variants on these forms, but they're not important for our discussion.
Defamation is a kind of tort, meaning wrongful conduct that causes harm to another. Each state makes its own tort law, so each state gets to decide on the elements of a defamation claim. But because defamation laws regulate speech, state laws are subject to limits imposed by the First Amendment free speech guarantee. In other words, if a state's defamation law is found to violate the First Amendment, the law is unconstitutional.
While there might be minor state-to-state variations, a defamation plaintiff typically must prove that the defendant made a false statement of fact about the plaintiff, that the statement was communicated ("published") to someone other than the plaintiff, that the defendant was at least negligent in making the statement, and that the statement harmed the plaintiff's reputation.
The First Amendment dictates the controlling legal standard in a defamation case. Stated a bit differently, the First Amendment answers this question: How blameworthy must the defendant be to be held legally responsible ("liable") for harmful speech?
When the plaintiff is a well-known public figure like a celebrity, a famous athlete, or a politician, they must prove that the defendant made the false statement of fact with "actual malice." Actual malice means the defendant knew the statement was false, or they made the statement with reckless disregard for its truth or falsity. Because it requires proving the defendant's state of mind, the actual malice standard is tough to meet.
A defamation plaintiff who isn't a public figure need only show that the defendant made the false statement negligently. Negligence is a legal term for carelessness. Proving negligence is much easier than showing actual malice.
Importantly, keep in mind that the First Amendment only protects "persons." The Supreme Court has interpreted that to include both human beings and artificial persons like corporations. Here's why that matters in the context of AI speech.
In earlier cases deciding whether software outputs are protected speech, courts haven't treated the output as the software's own speech. Instead, they've asked whether that output embodies expression that can be attributed to the software's authors or owners.
A special federal law protects internet content hosts like websites, search engines, and social media platforms. Section 230 of the Communications Decency Act of 1996 prohibits legal claims against content hosts for injuries resulting from third-party speech—speech by other people or companies—they host on their sites. (See 47 U.S.C. § 230(c) (2025).)
Suppose, for example, that Doe claims to have been injured because Roe said something harmful about Doe on Facebook. If Doe sues Facebook merely for hosting Roe's speech, Section 230 says Facebook can't be held legally responsible. For the past three decades, courts have interpreted Section 230 very broadly.
But there's an important limit on Section 230's protection: It only applies to third-party speech. Section 230 doesn't shield content hosts from legal responsibility for harms caused by their own speech, called "first-party speech." As we turn in the next section to whether AI models are able to speak, watch for the distinction between first-party and third-party speech.
It should come as no surprise that this question lies at the core of AI defamation law. If AI models don't "speak"—if they don't produce expression that's attributable to First Amendment persons as discussed above—then based on current law, the defamation inquiry comes to an abrupt end.
Three case decisions—including one from the United States Supreme Court—address this question. Unfortunately, the answer still isn't clear. The best we can say is: It appears that some AI models are capable of speaking, at least under some circumstances.
But where the First Amendment is concerned, context matters. That some AI models can speak under some circumstances doesn't mean that every AI output qualifies as speech. A look at the cases helps to explain why.
Florida and Texas passed laws limiting the ability of social media companies and others to regulate, edit, or remove third-party content on their platforms. In other words, Florida and Texas wanted to force companies like Facebook and YouTube to host content the companies preferred to exclude.
NetChoice, a trade group consisting of social media and similar companies, sued. It argued that the state laws impermissibly interfered with its members' protected speech, violating the First Amendment. In Moody v. NetChoice, the Supreme Court agreed.
Specifically, said the Court, editorial decisions like what content to allow, what content to exclude, and what content to regulate have long been understood—in the context of newspapers and other traditional media—to involve First Amendment speech. Forcing someone to speak by making them include content they don't agree with can change the message they want to deliver, interfering with their speech.
NetChoice didn't deal with defamation. Instead, the basic question was whether computer algorithms—software instructions—made editorial decisions attributable to the companies themselves when they censored or removed content. The Supreme Court's reasoning analogized to decisions involving traditional media to decide that the answer was "yes."
But editorial decision making is just a narrow slice of the First Amendment speech pie. It isn't clear that the Court's answer in this context can be broadened to apply to all speech questions, including claims of defamation.
Tawainna Anderson, the mother of 10-year-old Nylah Anderson, claimed that TikTok's software promoted the "Blackout Challenge," encouraging users to post videos of themselves engaged in acts of partial self-asphyxiation. Nylah decided to take the challenge and accidentally hanged herself. Ms. Anderson sued under state law. TikTok claimed it was shielded from liability by Section 230.
In a potentially groundbreaking decision, the federal court of appeals for the Third Circuit found that TikTok wasn't protected by Section 230. Why? Anderson argued that TikTok's algorithms made editorial decisions, much like the speech involved in Moody.
The court of appeals agreed. TikTok's algorithms decided what content to promote to users, including Nylah. Consistent with Moody, the appeals court found that TikTok wasn't simply hosting third-party speech. By deciding what content users should see, TikTok's algorithms were speaking for the company. First-party speech like this, the court said, isn't shielded by Section 230.
Here again, it's important not to over-generalize. Anderson can be seen as a "mirror image" of NetChoice. In NetChoice, the algorithms censored or removed content. Anderson was about algorithms that caused content to be promoted or fed to users, and whether that "speech" was protected by Section 230.
Anderson won't be the last word on AI speech and Section 230. TikTok chose not to ask the Supreme Court to review this case. But the same issue will appear again, sooner rather than later, giving the Supreme Court a chance to decide the issue once and for all.
Megan Garcia alleged that Sewell, her 14-year-old son, took his own life after becoming addicted to an AI large-language-model (LLM) chatbot hosted on Character.AI. Garcia said that "Dany"—one of the chatbot's fictional characters—led Sewell to believe they had a real and loving relationship. Among other things, Garcia claimed that the chatbot was a dangerous and defective product.
Defendants denied that the chatbot was a product. They asked the court to dismiss Garcia's claims, arguing that the chatbot's output was speech protected by the First Amendment. The federal district court disagreed.
The court didn't provide a definite answer to the speech question. Instead, Judge Conway said that at the motion to dismiss stage—very early in the lawsuit—she wasn't prepared to find that the LLM's responses to Sewell's inputs were protected expression attributable to the LLM's owner or creator. Judge Conway allowed Garcia to proceed with her defective product claims.
Based on the cases discussed above, it's possible that AI models can speak, at least for some First Amendment purposes. If computerized editorial decision making is speech, the argument goes, the same conclusion should follow when algorithms put words together (that is, they "speak") in ways that convey meaning to people.
But it's not clear that all AI "speech" is created equal. Depending on the facts, AI output might or might not be expressive, and thus protected under the First Amendment. Until the Supreme Court provides clear guidance, lower courts will have to struggle with these often-messy issues.
Even if we assume that all AI outputs are speech, we don't automatically have the elements necessary for a defamation claim. Remember that a defamation plaintiff needs to prove actual malice (in the case of a public figure) or negligence (in the case of a non-public figure). The question is whether, based on outputs generated by an algorithm, courts can attribute the required state of mind to the algorithm's creators or owners. Depending on the facts, that's likely to be a tall order.
Let's divide the universe of potential liability for AI-generated content into three groups: the companies that create AI models, the search engines and platforms that host or deliver content, and the people and companies that use AI to create and publish content.
Here's a quick look at the defamation elements for each potentially liable party.
Defamation liability for AI model creators seems like a stretch. An AI model is just a huge set of machine rules and instructions. Using those rules and instructions, AI reviews enormous data sets to "learn" language, then compiles responses to specific queries in formats that mimic human speech or writing.
It's possible, of course, that an AI model maker could deliberately or negligently instruct the model to produce false and defamatory content. Were that to happen, it would make sense to hold them legally responsible for any resulting harm.
Short of that, though, the elements of a defamation claim are lacking. The model maker doesn't know what the model will say in response to any particular query, doesn't make a statement of fact about any particular person, and doesn't act with the state of mind (actual malice or negligence) that defamation requires.
In addition, standard industry practice dictates that model makers include warnings and disclaimers, letting users know that AI models sometimes make mistakes or hallucinate and that user supervision is essential.
If all they do is host third-party content, search engines and content hosts have a seemingly impenetrable defense to liability in Section 230. Many defamation cases have died a quick death on the Section 230 hill.
But because some search engines and other platforms now use AI to curate, edit, and feed content to users, they run the risk of being speakers, not mere hosts. Anderson is a dark cloud hanging over their heads. While this decision doesn't apply nationwide—as would one from the Supreme Court—it sets a precedent that other courts can (and many probably will) choose to follow.
Of course, a defamation plaintiff still faces the same additional problems we discussed above. In particular, proving that the owner of a search engine or content host acted with actual malice will be challenging. Proving simple negligence, though, won't be as heavy a lift.
People and companies that use AI to create and publish content—an increasingly common practice—are more likely targets for defamation liability. Typically, someone uses AI software by querying it. For example, a query might say "Has [name] ever been convicted of a crime?" or "Tell me about Jennifer Garner's movie career."
The algorithm responds based on what it's "learned" about the subject, using rules and instructions provided by the model maker, the content producer, or both.
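For content producers who automate this step, here's a minimal illustrative sketch in Python of what such a query can look like in practice. It assumes the OpenAI Python client library (version 1.x) and an API key set in the environment; the model name and the prompt are placeholders for illustration, not details drawn from this article.

    # Illustrative sketch only: assumes the OpenAI Python client (v1.x) and an
    # OPENAI_API_KEY environment variable. The model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "user", "content": "Tell me about Jennifer Garner's movie career."},
        ],
    )

    # The returned text is only a draft. The person or company that publishes
    # it bears the legal risk, so every factual claim should be verified first.
    draft = response.choices[0].message.content
    print(draft)

The key point for defamation purposes isn't the particular library or model: it's that the response comes back to a person or company who decides whether, where, and in what form to publish it.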
More importantly, people who use AI to create content can review (or should be held responsible for reviewing) AI speech. Those people are capable of negligently or maliciously publishing false factual content. It shouldn't matter who or what creates the content as long as the ultimate responsibility for its truthfulness and accuracy rests with a person or company that can be held liable for defamation.
If you use AI to produce content that you publish, say, on a social media platform, should you be concerned about defamation liability? In a word: Yes.
Even without any changes to account for AI, the law is clear: You're responsible for any content you publish, whether alone or with others. The law likely won't distinguish between content you write and content AI writes for you. In other contexts, "I didn't know AI would do that!" hasn't been a successful defense. Lawyers, for example, have been punished for using AI to write legal briefs containing hallucinated legal authorities.
Note, too, that if you publish AI-generated content on a social media platform, you won't be able to take advantage of Section 230 immunity. Under that law, you're a speaker and not a content host, which means you're not shielded from suit.
What can you do to protect yourself? For starters, you should carefully review any AI-generated content for truthfulness and accuracy. If you're unsure whether AI content is defamatory, don't publish it until you've been advised by an attorney with expertise in defamation law.
The law changes in response to human needs. As technology creates new ways for people to interact, work, and communicate, laws must evolve to keep pace. But legal change doesn't happen overnight. Courts and legislatures need to be presented with concrete facts—actual cases involving real harm to real people and businesses—so they can understand the nature of the problem and devise workable solutions.
If you've been harmed by AI-generated content, talking to an experienced defamation lawyer is a good first step. It might be that you have a case that doesn't fit neatly within the confines of existing defamation law, or that another type of legal claim might be a better fit.
Lawyers are allowed to pursue claims that make a good faith attempt to extend or modify the law. This sort of legal boundary testing is precisely what must happen in order to bring defamation laws in line with rapidly changing technology like AI models.