Policy · March 27, 2026 · 14 min read

Why Lying Shouldn't Be Free Speech

free-speech · first-amendment · accountability · democracy · misinformation

Lying is wrong. Not just when it causes measurable harm. Not just when a victim can be identified. The act of deliberate deception is itself the violation.

We already know this. Fraud is illegal. Perjury is illegal. False advertising is illegal. Securities fraud carries prison time. In every domain where we've thought carefully about it, we've concluded that knowingly false statements shouldn't be tolerated.

Except one. Political and public speech gets a bizarre exemption. The domain where lies do the most damage to the most people is the one domain where lying is effectively free.

That's incoherent. And it should end.


The Short Version

A car salesman who lies about a vehicle's history goes to court. A company that lies about its products faces the FTC. A witness who lies under oath goes to prison.

But lie to millions of voters about an election? Protected speech.

That's not a principle. That's an accident of history.

The law already says deliberate lies don't deserve protection. We enforce that standard in fraud cases, perjury cases, false advertising cases, and securities fraud cases. Courts and juries determine what's true in these contexts every day.

The only place we don't apply this standard is public and political speech, precisely where lies reach the most people and cause the most harm.

This article argues three things:

  1. The inconsistency is indefensible. We already punish knowing lies everywhere else. There's no principled reason political lies should be exempt.

  2. The law agrees in theory. The Supreme Court has said knowingly false statements aren't protected. The problem is the evidence bar is nearly impossible to clear.

  3. The principle should come first. Before debating enforcement mechanisms, we should establish the principle: deliberate deception should have consequences, regardless of domain.

This isn't about censorship. It's about consistency. If we trust courts to determine truth in a fraud case, we can trust them to do the same when the fraud is committed against the public.


The Incoherence

A car salesman who lies about a vehicle's history faces legal consequences. A company that lies about its products faces FTC enforcement. A witness who lies under oath goes to prison. An executive who lies to investors faces securities fraud charges.

But a media organization that broadcasts claims it knows are false to millions of viewers? A political figure who spreads fabricated stories about election fraud? Someone who creates and distributes AI-generated deepfakes designed to deceive voters?

Protected speech.

The principle we apply everywhere else in law is clear: knowingly false statements made with intent to deceive should carry consequences. But somehow we've carved out the one domain where those lies reach the most people and cause the most harm.

This isn't a free speech issue. It's a consistency issue.


The Philosophical Foundation

The case against lying doesn't depend on proving downstream harm. Kant got this right two centuries ago: lying is wrong because of what it is, not just what it causes.

When someone lies to you, they're not engaging in honest communication. They're manipulating you. They're treating you as an object to be moved rather than a person capable of making decisions based on accurate information.

This is the core of Kant's categorical imperative. Treat people as ends in themselves, never merely as means. A deliberate lie does the opposite. It uses another person's trust and rational agency against them.

The harm isn't just that lies lead to bad decisions. The harm is that deliberate deception denies people their epistemic autonomy.

Everyone has a right to make decisions based on accurate information. That's not a controversial claim. It's the foundation of informed consent, contract law, and democratic participation. When someone deliberately deceives you, they violate that right regardless of whether you can trace specific damages.

Mill's marketplace of ideas argument often gets cited to defend protecting false speech. Let truth and falsehood compete, and truth will win out. That argument made some sense in 1859.

It breaks down when algorithms amplify lies faster than corrections can spread. It breaks down when bad actors flood the information environment deliberately. It breaks down when platforms profit from engagement and outrage travels farther than truth.

The marketplace of ideas assumes roughly equal access and roughly equal amplification. Neither exists anymore.


The Legal Landscape

American law already recognizes that knowingly false statements deserve less protection. The problem isn't the principle. It's the execution.

Sullivan: Right Principle, Broken Application

New York Times v. Sullivan (1964) established the "actual malice" standard. A public figure suing for defamation must prove the defendant knew the statement was false or acted with reckless disregard for the truth.

The decision itself was defensible. It protected civil rights activists from Southern officials weaponizing defamation law. The principle is sound: public discourse requires breathing room, and honest mistakes shouldn't be punished.

What went wrong was the application over time.

Courts interpreted "actual malice" so narrowly that proving it became nearly impossible. The standard requires getting inside someone's head to prove what they knew. Unless you have smoking-gun internal communications showing someone acknowledged a statement was false before publishing it, you lose.

Justices Thomas and Gorsuch have both called for revisiting Sullivan. Thomas argues the decision lacks grounding in the original text or history of the First Amendment. Gorsuch contends that digital media has amplified misinformation in ways that render the actual malice standard inadequate for holding people accountable.

This isn't a partisan issue. The appetite for reform crosses ideological lines.

Alvarez: The Loophole

United States v. Alvarez (2012) struck down the Stolen Valor Act, which criminalized lying about receiving military decorations. Xavier Alvarez falsely claimed at a public meeting that he was a decorated war veteran. The Supreme Court ruled 6-3 that even though the statement was knowingly false, it was protected speech.

This created a massive loophole. You can lie freely as long as no single provable victim exists, even if millions are misled.

But there's a path forward in Alvarez itself. Justice Breyer's concurrence, which provides the controlling logic, said the law failed not because lying can never be regulated, but because this particular law was too broad. A more narrowly tailored statute addressing demonstrated harm could survive.

Congress proved Breyer right by passing a revised Stolen Valor Act in 2013 that did survive judicial review. The case shows the path: it's about how you write the law, not whether lying can be regulated.

Gertz: The Public Figure Trap

Gertz v. Robert Welch (1974) created a sensible distinction. Private citizens only need to prove negligence to win defamation cases, while public figures must prove actual malice.

The problem is who counts as a public figure.

Courts have defined the category so broadly that anyone involved in any public controversy gets treated as a public figure. The protection intended for ordinary citizens has been swallowed by the exception.

Dominion v. Fox: What It Takes

In 2023, Dominion Voting Systems reached a $787.5 million settlement with Fox News over false claims that Dominion machines were rigged. The judge had already ruled that the statements were false. The only question left was actual malice.

Dominion got that far because internal communications showed Fox hosts and executives knew the claims were false while broadcasting them anyway. Hosts texted each other calling the allegations "ludicrous" and "insane" while presenting them on air as credible.

That's what it takes to get accountability: smoking-gun evidence of private acknowledgment that statements are false.

Most liars aren't stupid enough to create that kind of paper trail. The Dominion case is the exception that proves how broken the rule is.


The Diffuse Harm Problem

Here's the core failure of current law: when everyone is harmed but no single victim can be identified, there's no remedy.

Fraud requires a specific victim who suffered specific damages. Defamation requires a specific person whose reputation was harmed. Even securities fraud targets specific investors who relied on false statements.

But when false claims spread to millions of people, corrupting public discourse and undermining democratic participation, no single person has standing to sue. The harm is diffuse. It's everywhere and nowhere.

The broader the lie's reach, the less legal accountability exists.

This is exactly backwards. The lies that cause the most total harm to the most people are the lies that face the least consequence. A local fraud gets prosecuted. A national deception gets First Amendment protection.


International Comparison

Other democracies have drawn these lines differently.

The EU Digital Services Act requires platforms to address disinformation. As of February 2025, the voluntary Code of Practice on Disinformation became a binding benchmark for compliance. Platforms that fail to act on known false content face fines up to 6% of global revenue.

Germany's defamation laws include criminal penalties for knowingly false statements. The NetzDG requires platforms to remove manifestly unlawful content, including defamation, within 24 hours. Companies that repeatedly fail face fines up to 50 million euros.

The UK's Defamation Act 2013 requires plaintiffs to show "serious harm" before bringing claims. But it also provides a robust public interest defense for journalism. The balance is different from the US, not the absence of balance.

Canada's approach has been mixed. Courts have upheld "responsible communication" defenses for journalists while striking down an election law that banned false statements about candidates, ruling it an unjustifiable restriction on free speech.

None of these systems is perfect. But they demonstrate that democratic societies can regulate false speech without collapsing into censorship. The American approach is an outlier, not a model.


The Objections

"Who decides what's true?"

Courts do. They do this constantly.

Every fraud case requires determining what's true. Every perjury prosecution turns on whether someone lied. False advertising enforcement requires FTC staff to decide what's accurate. Securities fraud cases ask juries to determine what executives knew and whether they deceived investors.

We already trust courts and juries with truth determinations in every domain except public speech. The "who decides" objection isn't principled. It's selective.

"Chilling effect on speech"

The chilling effect argument assumes that imposing consequences for lying will deter honest speech. But the current system has its own chilling effect.

When lies face no consequences, truth-tellers self-censor. Why report accurately if competitors can fabricate freely? Why invest in fact-checking if there's no penalty for getting it wrong? Why correct the record if corrections never catch up to lies?

The chilling effect cuts both ways. Right now, we've chosen to chill accountability rather than chill deception.

Legal scholar Leslie Kendrick at UVA has argued that the chilling effect is often an unsatisfactory justification for speaker's-intent requirements. On her account, the intuition that deliberate lying should be treated differently from honest error doesn't actually stem from chilling-effect concerns.

"Slippery slope to censorship"

We already draw these lines everywhere else.

Fraud is illegal. Perjury is illegal. False advertising is illegal. Securities fraud is illegal. None of these carve-outs for intentional deception have led to censorship of legitimate speech.

The slippery slope hasn't slid in any other domain. There's no reason to believe it would slide here if we applied the same standards consistently.

"Political speech is special"

Yes, it is. That's why it matters more, not less.

Political speech shapes elections, policy, and democratic participation. When that speech is deliberately false, the damage extends to everyone who relies on accurate information to participate in self-governance.

The argument that political speech deserves maximum protection assumes good-faith participation in discourse. Deliberate lying isn't good-faith participation. It's fraud on the public.


The State of Reform

Something is changing.

In 2024 and 2025, 46 states enacted legislation targeting AI-generated deepfakes in elections. Over 146 bills were introduced in 2025 alone addressing synthetic media and disinformation.

The approaches vary. Some states require disclosure when content is AI-generated. Others criminalize deepfakes intended to influence elections. Texas and Minnesota enacted outright bans that contain no exception for disclosed or labeled content.

Not all these laws will survive judicial review. A federal judge struck down California's AB 2839 in August 2025, ruling that mandatory disclaimers on political satire would "kill the joke" and amount to unconstitutional content discrimination.

But the legislative activity shows recognition that the current framework is broken. Courts, legislators, and voters increasingly understand that the information environment of 2026 doesn't match the assumptions of 1964 or even 2012.

At the federal level, the Algorithm Accountability Act would amend Section 230 to impose a duty of care on platforms using recommendation algorithms. The bill requires platforms to responsibly design and operate algorithms to prevent foreseeable harm, with a civil right of action for injured parties.

The House passed a bill in 2025 to sunset Section 230 by 2026. Whether that happens or not, the conversation has shifted from whether Section 230 needs reform to what form reform should take.


The Principle First

This article doesn't propose specific enforcement mechanisms. Who prosecutes? What penalties? How do you structure the cause of action? Those questions matter, but they're secondary.

First, establish the principle: deliberate lies should have consequences.

The domain shouldn't matter. The scale shouldn't create an exemption. The fact that a lie reaches millions instead of one person doesn't make it more protected. It makes it more harmful.

Once the principle is accepted, mechanics can follow. We could codify Breyer's Alvarez concurrence with narrowly tailored statutes for specific high-harm contexts. We could legislatively redefine the public figure doctrine. We could allow circumstantial evidence of knowing falsity: prior corrections ignored, contradictory private statements, financial motive to deceive.

The path forward isn't mysterious. It's the same path we've taken in fraud, perjury, false advertising, and securities law. Apply consistent standards. Require intent. Allow honest error. Punish deliberate deception.


Why This Matters

Democracy depends on shared facts. Not shared opinions. Facts.

When anyone can lie without consequence, facts become optional. When facts become optional, democratic deliberation becomes impossible. You can't have meaningful disagreement about what to do when you can't agree on what's true.

The marketplace of ideas assumes participants are at least trying to be honest. Remove that assumption and the marketplace becomes a con game. The most convincing liar wins.

We don't have to accept that.

Every other functioning legal system has found ways to punish deliberate deception without collapsing into censorship. We do it ourselves in every domain except public speech. The inconsistency isn't principled. It's just inertia.

Free speech is valuable. It enables the search for truth. But deliberate lies contribute nothing to that search. A knowing lie isn't speech in any meaningful sense. It's manipulation disguised as speech.

The First Amendment protects the marketplace of ideas. It doesn't require us to let fraudsters burn the marketplace down.


Sources

Legal Cases

  • New York Times Co. v. Sullivan, 376 U.S. 254 (1964) - Established "actual malice" standard for defamation by public officials
  • United States v. Alvarez, 567 U.S. 709 (2012) - Struck down Stolen Valor Act; Breyer concurrence provides path for narrow regulation
  • Gertz v. Robert Welch, Inc., 418 U.S. 323 (1974) - Distinguished public figure vs. private figure defamation standards
  • Masson v. New Yorker Magazine, Inc., 501 U.S. 496 (1991) - Fabricated quotes can constitute actual malice if they materially change meaning
  • Hustler Magazine v. Falwell, 485 U.S. 46 (1988) - Protected parody that signals its own falseness
  • Dominion Voting Systems v. Fox News Network (2023) - $787.5M settlement; judge ruled statements were false

Current Developments

  • Thomas and Gorsuch call to revisit Sullivan - Justices' statements on reconsidering the actual malice standard
  • State Deepfake Legislation Tracker - Public Citizen tracker of 146+ state bills in 2025
  • California AB 2839 Struck Down - Federal judge rules law unconstitutional (August 2025)
  • Algorithm Accountability Act - Section 230 reform bill in 119th Congress

International Law

  • EU Digital Services Act - European Commission's platform regulation framework
  • Germany's NetzDG - Germany's network enforcement act
  • UK Defamation Act 2013 - Serious harm test and public interest defense
  • Canadian Defamation Law - Library of Congress overview

Philosophical Sources

  • Kant's Moral Philosophy - Stanford Encyclopedia of Philosophy on the categorical imperative
  • Treating Persons as Means - Stanford Encyclopedia on ends vs. means
  • Freedom of Speech - Stanford Encyclopedia overview including epistemic autonomy
  • Leslie Kendrick on the Chilling Effect - UVA Law scholarship on chilling effect doctrine
  • Mill's Marketplace of Ideas - First Amendment Encyclopedia overview

Last updated: March 27, 2026