
X Challenges Minnesota Deepfake Law: Free Speech vs. AI


X Corp. Challenges Minnesota’s Deepfake Election Law, Citing First Amendment Concerns

Elon Musk’s X Corp., formerly known as Twitter, has filed suit against the state of Minnesota, challenging the constitutionality of a recently enacted law that regulates the use of manipulated media, commonly known as deepfakes, in elections. The social media platform argues that the law infringes on fundamental free speech rights and could lead to excessive censorship of political discourse.

The lawsuit, a 30-page complaint filed on April 23 in the U.S. District Court for the District of Minnesota, names Minnesota Attorney General Keith Ellison as the defendant. X Corp. contends that the law’s provisions are overly broad and vague, creating a chilling effect on political speech by subjecting social media platforms to potential criminal penalties for hosting content that could be interpreted as violating the statute.

Specifically, X Corp. argues that the law’s enforcement mechanism places an undue burden on platforms, forcing them to err on the side of censorship to avoid the risk of criminal charges. According to court documents, there is no penalty for over-censoring content, but there are severe consequences for failing to remove potentially problematic material. This imbalance, the company claims, incentivizes platforms to suppress a wide range of valuable political speech and commentary, including humor and satire related to elections.

The Minnesota law, enacted in 2023, makes it a crime to share a deepfake within 90 days of an election if the person sharing the content knows, or should have known, that it is a deepfake created without the depicted individual’s permission and with the intent to harm a candidate’s reputation or influence the election. The Minnesota Secretary of State’s Office defines a deepfake as any image, audio, or video that is "so realistic that a reasonable person would believe it depicts speech or conduct of an individual who did not in fact engage in such speech or conduct," and whose production was "substantially dependent upon technical means." The statute explicitly references the use of artificial intelligence in the creation of deepfakes.

X Corp. asserts that the law’s definition of deepfakes is too broad and could encompass a wide range of innocuous election-related speech, including satire, parody, and political commentary. The company further argues that the law’s imposition of criminal liability on social media platforms for failing to censor such content violates the First Amendment rights of both the platforms and their users.

In a statement, X Corp. emphasized that the law "criminalizes innocuous election-related speech, including humor," and makes social media platforms "criminally liable for failing to censor it." The company argues that the law, rather than defending democracy, would actually erode it by stifling free expression and limiting the public’s access to diverse perspectives on political issues.

Beyond the First Amendment challenge, the complaint also raises concerns about the law’s consistency with federal law. X Corp. claims that the Minnesota statute conflicts with Section 230 of the Communications Decency Act, a federal law that provides broad immunity to internet companies from liability for content posted by their users. Section 230 is considered a cornerstone of the modern internet, allowing platforms to host a wide range of user-generated content without fear of being held responsible for its legality or accuracy.

Furthermore, X Corp. argues that the Minnesota law is unconstitutionally vague, making it difficult for social media platforms to determine what speech is permitted and what is prohibited. The company claims that the law’s lack of clarity creates a "chilling effect" on speech, as platforms are forced to err on the side of caution and censor potentially lawful content to avoid the risk of criminal prosecution.

The lawsuit comes at a time when states across the country are grappling with the potential threat of deepfakes to the integrity of elections. According to Public Citizen, a nonprofit consumer advocacy group, at least 24 states have passed legislation to regulate deepfakes in elections, while another 22 states have similar legislation pending. The growing number of these laws reflects a widespread concern that deepfakes could be used to spread misinformation, manipulate voters, and undermine public trust in democratic institutions.

However, critics of these laws argue that they often go too far, infringing on free speech rights and potentially chilling legitimate political discourse. In their view, existing laws against defamation and fraud are sufficient to address the potential harms posed by deepfakes, and new regulations could instead be used to suppress political expression.

The legal challenge brought by X Corp. is likely to have significant implications for the future of deepfake regulation in the United States. The outcome of the case could determine the extent to which states can regulate the use of manipulated media in elections without infringing upon First Amendment rights. It could also influence the interpretation of Section 230 and its applicability to deepfake content.

Attorney General Keith Ellison’s office has acknowledged the lawsuit and stated that it is "reviewing the lawsuit and will respond in the appropriate time and manner." No hearings have been scheduled in the case, according to federal court records.

The legal battle is expected to be closely watched by social media platforms, legal scholars, and election officials alike, as it raises fundamental questions about the balance between protecting free speech and safeguarding the integrity of elections in the digital age. The suit adds fuel to the ongoing debate about the responsibilities of social media platforms in policing content and the appropriate level of government regulation of online speech.
