The Daily Signal

Supreme Court Gives Ominous Forecast for State Laws Regulating Social Media

A woman plays with the logos of TikTok, Facebook, YouTube, Instagram, and X. (Didem Mente/Anadolu/Getty Images)

The modern public square is private.

That paradox is the lesson handed down by the Supreme Court in NetChoice v. Paxton and Moody v. NetChoice, two cases in which the world’s largest social media empires challenged state laws in Texas and Florida that curtailed their practice of online content moderation.

Just a few terms ago, the court observed that “social-media platforms have become the modern public square.” But on Monday, at least five justices opined that the platforms’ central content features—for example, YouTube’s homepage or Facebook’s newsfeed—are a “distinctive expressive product.” 

Thus, under the majority’s interpretation of the First Amendment, laws that protect access to those popular speech forums are likely unconstitutional because they detract from the companies’ prerogative of absolute private control.

The court’s actual holding was procedural: It remanded both cases back to the lower courts for consideration of whether the laws have a substantial number of unconstitutional applications. All nine justices concurred in that outcome.

But as with the Murthy v. Missouri decision issued last Wednesday, what appears to be a procedural ruling in fact has significant import for the future of open public discourse.

Major social media platforms have amassed great fortunes and incalculable amounts of cultural influence based on their ability to host, curate, promote, and distribute the expression of others. 

The platforms already enjoy immunity from civil liability for content published on their sites under Section 230 of the Communications Decency Act, a 1996 relic of the pre-social media era. To gain a still greater exemption from public accountability, the platforms argued that their business is inherently expressive and thus protected by the First Amendment from all laws that would detract from their ability to promote or remove the content that their users post.

Florida and Texas saw things differently. In 2021, Florida enacted state Senate Bill 7072, taking aim at several common ills by requiring publication and consistent application of standards for content, as well as requiring social media platforms to host political candidates and journalists.

Texas followed shortly thereafter, enacting HB 20, a bill drafted in the same spirit, but with a slightly different approach. The Texas law classified social media platforms with at least 50 million users as “common carriers” that were prohibited from censoring users based on the content of their speech or their affiliation(s). The bill also obliged designated common carriers to disclose content moderation practices publicly.   

Both laws faced challenges from NetChoice, an industry association representing large social media empires like Meta, X, and Google, as well as large e-commerce platforms, such as Uber and Etsy. NetChoice filed suit before either state attempted to enforce its law, claiming that both laws were facially unconstitutional violations of the platforms’ First Amendment freedoms.

The laws met different fates in the lower courts. The 11th U.S. Circuit Court of Appeals enjoined enforcement of Florida’s law, holding that the platforms’ content-moderation decisions were “protected exercises of editorial judgment.” Texas, however, prevailed in the 5th Circuit, which held that NetChoice’s members had no “freewheeling First Amendment right to censor what people say.”

Justice Elena Kagan—joined by Chief Justice John Roberts and Justices Sonia Sotomayor, Brett Kavanaugh, and Amy Coney Barrett (with Justice Ketanji Brown Jackson joining only in part)—rejected the 5th Circuit’s approach and implicitly condoned the 11th’s.

Determined to provide guidance to the lower courts on remand, Kagan wrote that Texas’ law, in its most significant applications, is unlikely to pass constitutional muster because it invades the protected realm of editorial “discretion” or “judgments,” the sort of expression-based liberty interests the court had previously countenanced when asserted by the editors of local newspapers.

Kagan set forth three principles likely to prove dispositive when the cases return to the lower courts: First, “compiling and curating others’ speech” is expressive; second, editing or moderating remains expressive, even if it excludes just a few posts among the millions; and third, governments cannot justify regulation of content moderation based on an interest in “better balancing the marketplace of ideas.”

Although the object of her criticism is the 5th Circuit’s decision and Texas’ law, the substance of her critique applies with identical or greater force to Florida’s law. Thus, readers of Kagan’s majority opinion should not conclude that Florida is on firmer ground, only that the 11th Circuit is, at least in Kagan’s estimation. 

Although Kagan acknowledged that the advent of social media makes for novel applications of the court’s pre-digital decisions, she was adamant that the inquiry is unchanging: Does a law mandating access for users “alter or disrupt” the platform’s own expressive conduct? The First Amendment’s principles, she maintained, “do not vary.”

Justice Samuel Alito concurred in the judgment, but explained in a separate opinion joined by Justices Clarence Thomas and Neil Gorsuch that one need not vary the First Amendment’s principles to question the uncritical ease with which the majority applied them to the novel practices of businesses such as Facebook.

Alito acknowledged the continuing force of decisions recognizing that editing or compiling the speech of others can itself receive First Amendment protection. He noted, however, that unlike speech written or spoken, not all compiling or editing is inherently expressive because it does not necessarily convey a message. 

Alito, like Kagan, offered his own three principles to the lower courts: First, platforms must demonstrate that they actually select and edit user content, not just passively host it; second, the resulting content compilation must make “some sort of collective point”; and third, the platforms must show that communication of that collective message is impaired by the requirement to host other speech. 

Under that rubric, Alito had no difficulty concluding that NetChoice failed to meet its burden of proving the laws were unconstitutional in a substantial number of applications.

NetChoice was evasive about which of its members were affected. Those members it admitted were covered use a disparate array of content-moderation policies, some of them user-based, applied to a multiplicity of user functions. More importantly, the court’s majority had no factual basis for the necessary premise of its First Amendment guidance—that the algorithms used for certain kinds of content moderation were in fact expressive in a way analogous to the judgment used by human newspaper editors.

It’s questionable whether there is any “message” in the morass of content social media platforms leave available for public consumption. More doubtful still is the notion that whatever message the platforms have is somehow impaired by an obligation to host user content.

Left undecided by today’s opinions is whether the states can regulate social media platforms as common carriers akin to phone service providers. If forced to answer, Kagan and the justices joining her majority would probably use the same First Amendment reasoning to reject the common carrier rationale.

But perhaps on remand, the states can wrest enough information from the platforms about their algorithms to demonstrate how little these computational processes resemble the naturally expressive turns of human thought, the real object of the First Amendment’s solicitude.

Such a showing could buttress the states’ argument that platforms are more like regulated telecom companies than they are like a newspaper’s editorial board or the academics making selections for the latest literature anthology.    

Kagan alludes to one other possibility for checking social media’s empire of influence; namely, government authority to enforce “competition laws to protect that access.” Efforts to do so at the federal level have yet to make much of a dent, and they face headwinds from decades of antitrust jurisprudence that have strayed from the original purpose of laws such as the Sherman Act of checking monopoly power.

But if a majority of the court is determined to treat the broad sweep of algorithmic content moderation as sacrosanct under the First Amendment, then enforcing content-neutral competition laws is the only other apparent option for curbing the outsized influence of tech moguls in the national discourse.
