First Amendment & Social Media Censorship: Breaking Down NetChoice, LLC v. Paxton

Social media platforms have become some of the most prominent arenas for debates over freedom of speech in recent years. While the First Amendment is intended to protect citizens from government infringement of their right to freedom of speech, the law is less clear about what role the First Amendment plays in protecting speech expressed on web-based platforms owned and operated by private companies like Meta (Facebook/Instagram), Twitter, TikTok, and the like.

Recent rulings from the nation’s highest court may offer some insights on how the justices interpret the First Amendment as it applies to censorship on social media platforms.

On May 31, the Supreme Court granted an emergency stay request from the tech industry groups challenging the Texas Legislature’s House Bill 20. The bill, originally passed in September of 2021, would prohibit social media platforms from blocking, removing, or otherwise discriminating against users’ posts on the basis of their political views. Notably, implementation of the bill was blocked by the Supreme Court in a 5-4 decision joined by a surprising blend of liberal and conservative justices: Chief Justice John Roberts and Justices Breyer, Sotomayor, Kavanaugh, and Barrett (you can read Justice Alito’s reasons for dissenting here).

While the question of the law’s constitutionality remains unresolved for now, the high court’s ruling means that the law will not be able to take effect while the case—NetChoice, LLC v. Paxton—continues through the U.S. Court of Appeals for the Fifth Circuit.

Given the significance of this case for freedom of speech, let’s dive deeper into some* of the arguments presented by both sides to understand the potential implications for both the First Amendment and social media censorship.

*Note: This blog post covers only a fraction of the hundreds of documents filed for this case so far. It is not intended to be a comprehensive assessment of all arguments involved, but rather an overview of First Amendment-related arguments both for and against government regulation of social media companies, as well as the potential implications this case could have for freedom of speech.

Arguments Presented Against H.B. 20 

In the initial application to block the implementation of H.B. 20, filed on May 13, 2022, the applicants (NetChoice, LLC and the Computer and Communications Industry Association) presented several arguments related to three central claims about the harms H.B. 20 would cause. The first claim was that the bill would negatively impact business. The applicants explained this by saying that, “because there is no ‘off-switch’ to platforms’ current operations, the cost of revamping the websites’ operations would undo years of work and billions of dollars spent on developing some platforms’ current systems” (p. 3).

Their second claim was that H.B. 20 would lead to the proliferation of “objectionable viewpoints.” The applicants cited several high-profile examples from recent years including Russian propaganda about the invasion of Ukraine, ISIS propaganda, pro-Holocaust content and posts encouraging disordered eating among children and adolescents.

Finally, the applicants claimed that the bill would infringe on the platforms’ editorial freedom, explaining that it would “[impose] related burdensome operational and disclosure requirements designed to chill the millions of expressive editorial choices that platforms make each day” (p. 1). Their supporting arguments for this claim repeatedly cited the Supreme Court precedent established by Reno v. ACLU (1997) to support their position on social media companies’ editorial control over content published and disseminated on their websites. The applicants pointed out that all platforms have their own hate speech policies, and also “engage in speech they author themselves, through warning labels, disclaimers, links to related sources, and other commentary they deem important” (p. 8).

Among the many direct references to the First Amendment, the applicants cited Supreme Court precedents set by Tornillo, PG&E and Hurley to argue that the First Amendment “[protects] the rights of private entities (a newspaper with market power, a monopoly public utility, and parade organizers) not to disseminate speech generated by others (candidates, customers, and parade participants)” (p. 19).

After the initial application was filed, multiple parties submitted additional supporting documents in the middle of May. In one such filing submitted on May 17, the summary of the applicants’ argument was as follows:

“HB20 will have an unprecedented detrimental effect on online platforms as we know them. It will transform social media platforms into online repositories of vile, graphic, harmful, hateful, and fraudulent content, of no utility to the individuals who currently engage in those communities. And it will flood otherwise useful web services with wasteful and irrelevant content. A single-sentence, unreasoned order is an unwarranted way to mandate this devolution.”

Dozens of pages of documents were filed on behalf of the applicants prior to the Supreme Court’s decision to issue the emergency stay order. You can find more detailed information on the applicants’ arguments here.

Arguments in Favor of H.B. 20

On May 18, Attorney General of Texas Ken Paxton filed a response to the applicants. Some of the key arguments brought up by the respondent included: 

  • Citizens ought to be guaranteed access to the “modern public square.” The argument here is that removing content or blocking users on the basis of their expressed viewpoints may constitute an exclusion from the digital public sphere.
  • The “Hosting Rule” in H.B. 20 does not prohibit social media platforms from removing entire categories of content. For example, the respondent argued that these companies could ban all foreign government speech if they don’t want to host Russian propaganda about the invasion of Ukraine (as long as the content bans are applied equally). Furthermore, Paxton argued, H.B. 20 applies only to expressions shared or received in Texas specifically.
  • The “Hosting Rule” does not implicate the First Amendment because it “regulates conduct, not speech—specifically, the platforms’ discriminatory refusal to provide, or discriminatory reduction of, service to classes of customers based on viewpoint” (p. 21). The respondent also argued that, even if the First Amendment is implicated by H.B. 20, social media companies may be viewed as “common carriers” for communications because they “hold themselves open as willing to do business with all comers on equal terms; they are communications enterprises; they are demonstrably affected with a ‘public interest’; and they enjoy statutory limitations on liability” (p. 26). 
  • Social media companies have repeatedly claimed that they neither publish nor edit content (which offers them some legal immunity under Section 230 of the Communications Decency Act of 1996), but they allegedly contradicted themselves by arguing that “HB 20 limits their editorial discretion over user content in violation of the First Amendment” (p. 14).

First Amendment Issues to Consider in the Wake of NetChoice, LLC v. Paxton

As mentioned at the beginning of this post, the Supreme Court has not determined whether H.B. 20 infringes upon the First Amendment rights of social media companies. By granting the emergency stay request, the justices effectively blocked the law from taking effect while the lower courts continue to assess its constitutionality. This process could take several months to resolve, so in the meantime, we ask you to reflect on the following questions related to arguments presented by each side in NetChoice, LLC v. Paxton:

  1. If large social media platforms may be considered “common carriers” for public discourse, should they be required to host [what may be considered] hate speech and/or graphic content on their platforms? 
  2. By what standards should social media companies determine the acceptability of content published and/or disseminated on their platforms? In other words, how could these platforms realistically determine what is merely an expression of speech versus potentially/actually harmful content?
  3. Should internet-based, user-generated content platforms be held to the same rules and standards as print-based news platforms? Should Supreme Court precedent arising from the unanimous decision in Miami Herald Publishing Co. v. Tornillo (1974) apply?
  4. Should we consider repealing Section 230 of the Communications Decency Act and hold social media companies liable for what their users publish or disseminate on their platforms? 

We welcome you to share your thoughts, insights or additional questions you have about this case in the comments section below this post!

Fact or Fiction: How Misleading Statistics Contribute to Polarization and What We Can Do About It

FACT: 100% of people reading this will continue to read beyond this sentence.

Perhaps the above statement isn’t true, but how would you, the reader of this blog, be able to test its validity and reliability anyway?

Simply labeling something as a “fact” and citing a statistic can have an enormously persuasive influence on audiences; not even professionals, journalists or those with excellent statistical reasoning skills are immune to what are commonly referred to as statistical fallacies.

One of the biggest problems with statistics shared by news stories, blogs, podcasts and other outlets of information is that it’s not only difficult to determine how accurately the author interpreted the data, but it’s also hard to know whether issues with the study’s methodology might have produced misleading data.

After all, some scientists have admitted to falsifying or fabricating data, which may be due to the fact that researchers feel enormous pressure to produce statistically “significant” findings in order to receive grant funding for their work or get published (which is often a requirement for tenured researchers at academic institutions). Furthermore, a 2015 survey of 1,118 journalists by the Shorenstein Center on Media, Politics, and Public Policy at Harvard found that while 80% of respondents agreed that knowing how to interpret statistics from sources is important, just 25% said they felt “very” well-equipped to interpret data on their own.

So what does all of this mean?

In short, most of us are not well-equipped to interpret statistical information. No human being is completely free from cognitive biases, and the processes of motivated reasoning often lead us to quickly accept information that aligns with our preexisting beliefs while taking more time to scrutinize and criticize information that contradicts our beliefs.

None of us have the time or energy to double-check every statistic we encounter on a regular basis. However, for moments when those numbers really matter to you – such as differing efficacy rates among Covid-19 vaccines or job salary ranges – there are some useful, time-saving strategies for evaluating the accuracy of statistical information.

For starters, go to the original source of the information to confirm that the author of whatever you’re reading or listening to is interpreting the data correctly. You don’t need to be a data scientist or mathematician to understand the basics of statistical findings. Some things to be on the lookout for in the original study include:

Who Supported the Research: Did this information come from a peer-reviewed academic journal funded by grants or was it produced and funded by a company with a financial conflict of interest? In other words, what is the purpose or incentive for the organization(s) involved to contribute to the study?

Let’s unpack this with an example.

PLOS Medicine* published an article in 2013 entitled, “Financial Conflicts of Interest and Reporting Bias Regarding the Association between Sugar-Sweetened Beverages and Weight Gain: A Systematic Review of Systematic Reviews.” The review of the research found that studies with financial conflicts of interest (funded by companies like Coca-Cola and PepsiCo) were 5 times more likely to report there was no significant link between the consumption of sugary beverages and weight gain or obesity, compared to studies with no conflicts of interest.

*To practice what I’m preaching here, I originally found the study cited in a New York Times piece but went to the original study to confirm that the NYT’s depiction of this study was accurate.

Who Were the Participants and How Many: In academic and scientific research, you can typically find information pertaining to the background and number of participants in the “Methods” or “Methodology” section of an article. Participants’ demographic information (e.g., gender, race, age, income level, geographic location) and the study’s sample size (number of participants surveyed/studied) can help you determine whether the researchers’ inferences are accurate.

One example of a study that produced misleading data is LendEDU’s 2017 survey of 1,217 college students, which found that “nearly a third of Millennials have used Venmo to pay for drugs.” A major problem with this survey is that it did not clearly define its participants: there is no universal definition of who a “Millennial” is, and even if we define Millennials as those born between 1981 and 1996, not all college students fall into that age bracket. While the study claimed that its sample of 1,217 respondents was representative of the population of college students in the U.S. (roughly 20.5 million at the time), the Pew Research Center estimates there are approximately 72.1 million Millennials.

So what are we supposed to believe: that almost ⅓ of college students use Venmo to buy drugs, or that almost ⅓ of Millennials do? The two terms are not synonymous, and this goes to show why the number of participants and how they’re defined are critically important for evaluating whether a study accurately portrays the attributes, attitudes and/or behaviors of a given group of people. Unfortunately, a Google search about this study reveals that dozens of journalists and bloggers hastily shared these findings without scrutinizing how the research was conducted in the first place. This is just one of many examples of how reporters lacking scientific backgrounds or statistical reasoning skills can (often inadvertently) spread misinformation to their audiences.
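As a side note on the numbers themselves, a sample of 1,217 respondents is not inherently too small; the deeper problem is who that sample actually represents. Below is a minimal sketch of how you might estimate the margin of error for a reported proportion, assuming a simple random sample and the standard normal approximation (neither of which the LendEDU survey established):

```python
import math

# Rough 95% margin of error for a reported proportion, using the normal
# approximation. The figures come from the survey described above; the
# assumption of a simple random sample is ours, for illustration only.
n = 1217        # number of survey respondents
p = 1 / 3       # reported proportion ("nearly a third")
z = 1.96        # z-score for a 95% confidence level

margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"Approximate 95% margin of error: +/- {margin_of_error:.1%}")
# Prints roughly +/- 2.6 percentage points

# The takeaway: even a tight margin of error is meaningless if the sample
# was not randomly drawn from a clearly defined population, which is the
# flaw described above.
```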

Additional Resources for Developing Your Knowledge of Data Journalism and Statistical Reasoning Skills:

  • This 10-minute video from Crash Course Statistics is one of the most beginner-friendly tutorials on the subjects of scientific journalism and how data might be misrepresented by news publications.
  • The Challenge of Developing Statistical Reasoning: This article was published in the Journal of Statistics Education (2002) and offers an eye-opening glimpse at the variety of correct and incorrect forms of statistical reasoning you’ve probably seen before.
  • Data Journalism, Impartiality and Statistical Claims: This BBC Trust-commissioned study was published in Journalism Practice (2017). While the researchers acknowledged that the “use of data is a potentially powerful democratic force in journalistic inquiry and storytelling, promoting the flow of information…enriching debates in the public sphere” (p. 1211), the study revealed that politicians and business leaders in the UK often cited statistics in the media, but few journalists or members of the public questioned or verified those claims.