Shoot the Messenger: Why Section 230 Does Not Shield Suggestive Content Delivery

Internet companies have frequently relied on Section 230 of the Communications Decency Act of 1996 to avoid liability for third-party content hosted on their platforms. However, over time, companies began to take advantage of the broad cover of Section 230 in circumstances outside the statute’s original scope. This Note advocates for a more nuanced interpretation of the statute as it applies to suggestive algorithms and offers a proposal for amending Section 230 to better reflect the modern digital landscape.

Introduction

Following the conclusion of Super Bowl LVII in 2023, a team of Twitter1 engineers was summoned late at night to solve a “high urgency” problem.2 In a Slack message, James Musk, cousin of owner and CEO Elon Musk, wrote that there was an “issue with engagement” throughout the platform.3 The problem? President Biden’s tweet supporting the Philadelphia Eagles had generated nearly 29 million impressions, while Elon Musk’s own tweet supporting the team generated only 9.1 million impressions.4 Musk, who deleted the tweet in apparent frustration, flew back to Twitter’s headquarters to demand answers and oversee a “fix” to the issue.5

Working into the morning, Twitter engineers deployed code that artificially boosted Musk’s tweets and allowed them to bypass Twitter’s filters designed to show users relevant content.6 Musk’s tweets were boosted “by a factor of 1,000,” ensuring that they ranked higher than anyone else’s in a user’s feed.7 The site’s algorithm was also modified to allow Musk’s tweets to overcome safeguards intended to prevent a single account from flooding Twitter’s “For You” feed.8 By Monday afternoon, Musk’s tweets and replies dominated the feeds of millions of Twitter users—whether they followed him or not.9

Since purchasing Twitter in 2022,10 Musk has shown a willingness to alter the site’s algorithm to influence what content is shown to users.11 After restoring the account of conspiracy theorist Alex Jones in December 2023, X’s algorithm actively promoted the account to other users, and conspiratorial posts from Jones’ account appeared in the “For You” feed of users who did not follow him on the platform.12 Jones is known for advancing the conspiracy that the 2012 Sandy Hook shooting massacre was a hoax.13 In a livestreamed interview on X, promoted by Musk, Jones made false claims regarding his harassment of Sandy Hook victims’ families, claiming that he was “just asking questions.”14

Suppose that Jones had made false and defamatory statements concerning the shooting in a post on X, which in turn was amplified by the site’s algorithm and shown to users who were not following Jones and who had not expressed interest in the post’s subject matter.15 Jones, as the author of the post, would be liable for its content.16 Could X––the platform whose algorithm artificially boosted and promoted the post to others––also face liability for defamation and harassment, if Jones’ post appeared in the “For You” feed of one of the grieving parents? The answer turns on a statute whose key provision spans just twenty-six words: Section 230.17

Internet companies have frequently relied on Section 230 of the Communications Decency Act of 1996 to avoid liability for third-party content hosted on their platforms.18 Undoubtedly, many aspects of the internet would be unrecognizable without Section 230, as it has played a major role in allowing free-flowing public expression.19 Internet companies would have struggled to grow their platforms if they could be held liable for defamatory statements made by others.20 However, over time, companies began to take advantage of the broad cover of Section 230 in circumstances outside the statute’s original scope.21 One example is Section 230’s application to recommendation algorithms.22

Social media algorithms were born with Facebook’s introduction of ranked, personalized news feeds in 2009.23 Today, many of the most popular internet services, such as search engines, streaming services, and social media platforms, rely on algorithms to personalize each user’s experience and identify the content with which they will most likely engage.24 What a user sees on the internet is often the product of an algorithmic process choosing from a large amount of content or information.25 These algorithms, now found everywhere on the internet, were practically absent from public use when Section 230 was enacted.26

In recent years, a debate has emerged over the scope of Section 230 and whether internet companies are entitled to its broad immunity when they use algorithms to deliver third-party content.27 Some maintain that recommendation algorithms should not be excluded from Section 230 immunity.28 Others question whether Section 230, as written, applies to modern tools such as algorithms,29 arguing that recommendation algorithms have allowed platforms to take a more active role in how content is shared and to whom it is distributed.30

While attempts have been made to distinguish between the different types of recommendation algorithms,31 these arguments have generally treated all instances of their use the same, despite the fact that some are much more “suggestive” than others.32 In reality, recommendation algorithms operate on a spectrum: Some, such as those used to filter search results and recommend friends, are seemingly innocuous and work to enhance the user experience.33 Others, such as those used to curate news feeds and promote video content, have a more profound impact and can facilitate the spread of misleading or defamatory information.34

Because it overlooks these important differences, Section 230 is insufficiently nuanced, and it confers a great deal of interpretive authority on the judiciary.35 Courts, in turn, have also failed to grasp these distinctions.36 From a textualist and policy perspective, the best interpretation of the statute is that immunity does not attach if the algorithm used to deliver the third-party content is so suggestive that it generates a message of its own.37 By contrast, other, less-suggestive algorithms may be entitled to immunity.38 And while this interpretation is both wise and within the statute as it is written, it would be better for Congress to amend the statute in its entirety to eliminate this ambiguity.39

Part I of this Note begins by examining Section 230 in depth, tracing its origins and the intent of those who helped create it.40 Part I then looks at how courts have traditionally interpreted the statute and considers previous attempts by Congress to amend it.41 To better understand the role that algorithms play in delivering content, Part I also explains the different types of algorithms used by internet companies and compares some of the algorithms used by popular social media platforms.42 Part II analyzes the different approaches taken by judges in newer Section 230 cases involving algorithmic recommendations and evaluates the merit of some of the proposed legislative changes.43 Part III advocates for a more nuanced interpretation of the statute as it applies to suggestive algorithms and offers a proposal for amending Section 230 to better reflect the modern digital landscape.44

I. Background

A. Section 230

Section 230 has been referred to as “the twenty-six words that created the internet.”45 It has provided broad immunity to service providers and internet companies since 1996.46 The relevant provision of the statute states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”47 Section 230 arrived as part of the first update to federal telecommunications law in over sixty years.48 Despite its modern importance, Section 230 was only one small part of a larger legislative package: the Communications Decency Act of 1996, which was itself part of the Telecommunications Act of 1996.49 Congress hoped that Section 230 would promote the continued development of the internet and encourage the removal of offensive or illegal content.50

1. The Legal Landscape Prior to 1996

Prior to the internet, courts consistently held that publishers (who were expected to be aware of the material they published) could be held liable for any illegal content they published, while distributors (who were likely unaware of the content) generally could not.51 For example, while a newsstand could not be held legally responsible for illegal content found in one of the newspapers it sold, the publisher of the newspaper could be.52 This is because a person who “delivers or transmits defamatory matter published by a third person” is liable for defamation only if they “knew or had reason to know” that the content was defamatory.53 Under the Second Restatement of Torts, a distributor could therefore be held liable if it acted negligently, even if it lacked actual knowledge that the content was defamatory.54 While the difference between a publisher and a distributor appeared at one point to be black and white, the development of the internet has blurred the line between the two.55

By today’s standards, internet experiences in the 1990s were “underwhelming” and “static.”56 Websites rendered a consistent page to all users, no matter who they were or where they were accessing the website from.57 Internet forums allowed users to discuss various topics and interact with content created by others.58 At the start of the decade, internet platforms were considered “passive conduits” for users’ communications, akin to telephone companies.59 The question for courts was whether these platforms were publishers of third-party content such that they could be held liable for negligence in publishing it, or whether they were simply distributors of it.60

Two cases decided in the first half of the 1990s addressed this question.61 These cases involved CompuServe and Prodigy, two of the largest online service providers at the time.62 CompuServe did not moderate content posted on its service by other users.63 By contrast, Prodigy did have moderators and tried to prevent illegal content from being shared on its platform.64 Both were sued for defamation based on third-party content hosted on their sites.65 In Cubby, Inc. v. CompuServe, Inc., the owner of a media news outlet sued CompuServe for hosting a forum in which allegedly defamatory material was posted about the news outlet.66 The Southern District of New York held that because CompuServe allowed all content on its service to go unmoderated, it was a distributor and was therefore not liable for libelous content posted by its users.67 Conversely, a New York state court held in Stratton Oakmont, Inc. v. Prodigy Services Co. that because Prodigy had taken an editorial role by moderating its users’ posts, it was a publisher and was liable for libelous third-party content.68 The court wrote that Prodigy differentiated itself by “exercis[ing] editorial control over the content of messages” posted on its bulletin boards and likened the company to a newspaper.69 Ironically, by trying to prevent illegal content, Prodigy ended up exposed to more liability than it would have faced if it had allowed a free-for-all.70 The message to online service providers was therefore to take a hands-off approach to content on their platforms.71

2. Legislative Intent Behind Section 230

Stratton Oakmont directly led to Section 230’s introduction in Congress shortly thereafter.72 Then-Representatives Chris Cox and Ron Wyden disagreed with the decision, arguing that online service providers should be encouraged and able to exercise control over what users post on their platforms.73 But Cox and Wyden hoped that Section 230 would do more than simply overturn Stratton Oakmont.74 The two aimed to create legislation that would encourage free speech on the internet and allow online service providers to moderate content without fear of liability.75 Cox envisioned a “cleaner” web, where it was more difficult for users to encounter obscene or offensive content.76 He feared that, under existing standards, companies trying to clean up such content risked opening themselves up to multimillion-dollar lawsuits; this risk would discourage companies from acting, and the internet would eventually devolve into the “Wild West.”77 The pair also reasoned that the growth and development of the internet as a whole would be limited if startups and emerging companies had to worry about liability for third-party content.78 Recognizing its importance, the House passed Section 230 (then referred to as the Cox-Wyden Amendment) by a vote of 420-4.79

Proponents of Section 230 maintain that it has succeeded in its goals and allowed for the creation of a “vibrant” social networking environment online.80 Without Section 230, they argue, many aspects of the internet would be “unrecognizable.”81 Cox and Wyden would likely agree, having recognized the internet as a “unique” opportunity for cultural development and an “extraordinary” advance in the availability of informational resources.82

3. Traditional Interpretations and New Approaches

Section 230’s straightforward language and limited exceptions gave courts significant flexibility in interpreting the scope of immunity provided to online service providers.83 At first, courts applied this immunity broadly.84

In 1997, the year after Section 230’s passage, the Fourth Circuit issued an opinion in Zeran v. America Online, Inc., holding that the statute creates a federal immunity “to any cause of action that would make service providers liable for any information originating with a third-party user.”85 In that case, an anonymous post on an America Online bulletin board alleged that an individual named “Ken” was selling offensive t-shirts related to the 1995 Oklahoma City bombing.86 The post included the home phone number of the plaintiff, Kenneth Zeran, who claimed to have received angry calls and death threats.87 Zeran sued America Online, seeking to hold it liable for the allegedly defamatory speech hosted on its platform.88 This marked the first time that a federal appellate court interpreted the scope of Section 230,89 and the court concluded that lawsuits seeking to hold a service provider liable for “traditional editorial functions,” such as deciding whether to publish, were barred.90

Zeran quickly caught the attention of litigants, courts, and legal scholars, sparking a new debate over the scope of Section 230.91 The Zeran court’s interpretation soon became the dominant one,92 and many consider it the most influential Section 230 decision.93 Opponents argued against this defendant-friendly reading of the statute, urging courts to apply Section 230 immunity more narrowly.94 However, courts rejected most of these challenges and continued to apply Section 230 protections broadly through the early 2000s.95

Over time, courts began applying broad immunity not only to service providers, such as America Online, but also to websites and other online platforms that hosted user-generated content, which more closely resemble modern social media platforms.96 In 2003, the Ninth Circuit ruled in Carafano v. Metrosplash.com, Inc. that Matchmaker.com was entitled to immunity from a lawsuit arising out of a user’s creation of a fake profile that used photographs of the plaintiff actress.97 Earlier that year, in Batzel v. Smith, the Ninth Circuit had extended Section 230 immunity to online platforms that made a “voluntary and affirmative” decision to display user content.98 There, the court immunized the Museum Security Network (“the Network”), a website and email listserv concerning stolen artwork, when it modified and published a defamatory email that it had received.99 The majority reasoned that Section 230 afforded immunity to the Network because it did not create the defamatory content.100 However, Batzel differed from previous cases in that the online platform took affirmative steps to review and edit the third-party content.101 Judge Ronald Gould, in a partial concurrence and partial dissent, reasoned that because the Network had selected and edited certain content, the content was no longer “information provided by another” and therefore not entitled to Section 230 immunity.102 He wrote that selecting particular information for distribution adds a person’s “imprimatur” to it,103 and he argued that a defendant who actively selected libelous information for distribution should not be entitled to immunity, even if the information was “provided by another.”104 The majority disagreed, holding that the Network’s minor alterations did not constitute “development of information.”105

A few years later, as online platforms took on a more active role in content curation, the Ninth Circuit attempted to curtail the scope of Section 230 immunity.106 The Fair Housing Councils of San Fernando Valley and San Diego sued Roommates.com, alleging that the site’s business practices violated the federal Fair Housing Act (FHA) and state housing discrimination laws.107 Roommates.com, attempting to match compatible roommates, required users to complete a profile with information about themselves such as their sex and sexual orientation.108 The website allowed users to search within the Roommates.com network for one or more of these criteria, and Roommates.com periodically sent emails indicating availability of housing that matched a user’s preferences.109 In response to claims that its business violated the FHA, Roommates.com argued that Section 230 immunity applied because the allegedly violative (discriminatory) content was provided by its users.110 The Ninth Circuit found Roommates.com only partially immune, not affording safe harbor protection to the website’s search function or email notifications.111 The “Additional Comments” section of profile pages, which allowed users to provide additional information beyond the required fields, was eligible for Section 230 immunity.112 The court reasoned that, since every profile page was a “collaborative effort” between the website and the user, Roommates.com was at least partially responsible for the content’s development.113 Here, the court first articulated the “material contribution test,” which bars Section 230 immunity if the service provider or online platform “makes a material contribution to the creation or development of content.”114 By contrast, “neutral tools” that do not make such a contribution receive immunity, so long as they are applied identically in all contexts.115

What constitutes “development” of content remained unclear, however. The Tenth Circuit attempted to answer that question in F.T.C. v. Accusearch Inc.116 Accusearch owned and operated Abika.com, a website that allowed users to access and obtain personal information about members of the public.117 The site acted as an intermediary, facilitating users’ acquisition of information from “third-party researchers.”118 Accusearch sought Section 230 immunity, arguing that it was being treated as the publisher of information provided by a third party.119 The court, in evaluating whether Accusearch partially developed the content, defined “develop” as “the act of drawing something out, making it ‘visible,’ ‘active,’ or ‘usable.’”120 Since Accusearch made the information “visible” by exposing it to public view, the court reasoned that the company was partially responsible for its development, meaning that Section 230 immunity did not apply.121

Roommates.com and Accusearch Inc. have been instrumental in shaping modern Section 230 jurisprudence.122 By focusing on a platform’s role in development, these cases “laid the foundation for applying Section 230 immunity to algorithmic recommendations.”123 Subsequently, in 2019, the D.C. Circuit became the first appellate court to do so.124

In Marshall’s Locksmith Service Inc. v. Google, LLC, the court granted Google immunity for using a neutral algorithm to translate fraudulently provided third-party information onto a map.125 Fourteen locksmith businesses alleged that Google violated false advertising and antitrust statutes by accepting advertising revenue from scam locksmiths that provided fake information.126 Google’s search algorithms provided this information to users in the form of pins on a map, in place of legitimate listings.127 The court held that Google’s display of the scam locksmith locations did not “enhance” the third-party content, and that it was simply a translation of third-party information.128 Marshall’s Locksmith thus expanded the “neutral tools” framework of Roommates.com to include neutral algorithms, as long as they are applied consistently.129

Shortly after Marshall’s Locksmith was decided, the Second Circuit analyzed whether Facebook’s recommendation algorithms were responsible for “developing” Hamas content.130 In Force v. Facebook, Inc., the plaintiffs alleged that the platform’s tools and algorithms allowed the terrorist group to quickly disseminate its content, which, they argued, facilitated deadly attacks on U.S. citizens in Israel.131 The court held that Facebook was immune because it did not alter the information published by Hamas, and the algorithms it used to arrange and display the content were deemed to be sufficiently neutral.132 The majority opinion reasoned that “merely arranging and displaying others’ content . . . through such algorithms” was not enough to hold the provider liable for developing or creating the content.133

In his partial concurrence, Chief Judge Katzmann questioned whether Facebook’s algorithms developed content through its recommendations.134 He argued that Section 230 immunity should not apply to Facebook’s friend and content suggestion algorithms because the company used the algorithms “to create and communicate its own message.”135 The mere fact that Facebook’s algorithms relied on third-party content should not have been enough to trigger the protections of Section 230, Chief Judge Katzmann reasoned.136 To fall within the scope of Section 230, he believed, the claim must “inherently fault the defendant’s activity as the publisher” of the specific third-party content.137

The Ninth Circuit followed the Second Circuit’s lead in Gonzalez v. Google LLC.138 In this case, the family of Nohemi Gonzalez, a U.S. citizen killed by ISIS, alleged that Google was directly liable for providing support to a terrorist organization by allowing ISIS to monetize videos uploaded to YouTube and promoting those videos through its algorithm.139 The Ninth Circuit ruled in favor of Google, finding that its algorithm did not develop the third-party content.140 The plaintiffs’ complaint lacked any allegations that Google designed its website to promote videos that furthered the terrorist group’s mission.141 Rather than deliberately suggesting certain kinds of content, YouTube’s algorithm neutrally matched what it knew about users based on their historical actions and sent third-party content that it anticipated the user would prefer.142 The court described the algorithm as simply a more sophisticated search engine, as it selected the particular content provided to a user based on that user’s inputs.143 Referencing Roommates.com, the court noted that search engines were immune under Section 230 because they provide content in response to a user’s queries, “with no direct encouragement to perform illegal searches.”144 Ultimately, the court found that Google similarly provided a “neutral platform” that did not determine which particular types of content its algorithms would promote.145

Judge Gould pushed back on this finding in yet another separate opinion, arguing that YouTube’s content recommendation system did “develop” and “deliver” a message to ISIS-interested users.146 He argued that YouTube’s algorithm “magnified and amplified” propaganda messages posted by ISIS and its sympathizers “in a way that contributed to the ISIS terrorists’ message beyond what would be done by considering them alone.”147 And he disagreed with the majority’s holding that Section 230 shielded Google from liability for its content-generating algorithm since the algorithm “acted affirmatively” to amplify and direct ISIS content.148 Considering how the algorithm appeared to operate, Judge Gould concluded that, while Google cannot be liable for “the mere content of the posts made by ISIS,” it could be liable for other conduct that went beyond simply hosting the post.149 He challenged the assumption that websites using “neutral” tools, such as algorithms, were generally immunized by Section 230, suggesting that the tools could not be considered “neutral” if the website “knowingly amplifie[d] a message designed to recruit individuals for a criminal purpose” and the dissemination of that message “materially contribute[d] to a centralized cause giving rise to a probability of grave harm.”150

The Supreme Court granted certiorari to review the Ninth Circuit’s ruling and to clarify whether Section 230 protects online platforms when they use recommendation algorithms.151 The Court, however, resolved the case without deciding the core Section 230 issue, finding that the complaint appeared to state “little, if any, plausible claim for relief.”152 It sent Gonzalez back to the Ninth Circuit to consider the complaint in light of the Court’s simultaneous decision in Twitter, Inc. v. Taamneh,153 which clarified the requirements of a claim made under the Justice Against Sponsors of Terrorism Act.154 The question therefore remains unresolved by the Supreme Court.155

But cases continue to emerge. Recently, in August 2024, the Third Circuit held in Anderson v. TikTok, Inc. that TikTok was not immune from liability when a ten-year-old child died after attempting the “Blackout Challenge,” prompted by a video found in her “For You Page” (“FYP”).156 TikTok’s algorithm promoted third-party videos depicting the challenge to the child’s uniquely curated FYP.157 The court found that TikTok’s algorithm was not based solely on a user’s online inputs and that it curated and recommended a tailored compilation of videos for a user’s FYP “based on a variety of factors.”158 The court held that the algorithm was TikTok’s own “expressive activity,” and thus its own first-party speech, even though the algorithm organized and presented the third-party speech of others.159

In making this determination, the court relied heavily on the Supreme Court’s decision in Moody v. NetChoice, LLC.160 In Moody, the Court held that a platform’s algorithm that reflects “editorial judgments” about compiling the third-party speech it wants––in the way it wants––is the platform’s own “expressive product” and is protected by the First Amendment.161 The Third Circuit extended this reasoning, concluding that curating compilations of others’ content via expressive algorithms amounts to first-party speech under Section 230 as well.162 Since Section 230 immunizes only information “provided by another,” Anderson’s lawsuit, based on TikTok’s own expressive activity, was not barred.163

Taken together, these decisions reflect a growing reluctance of courts to apply Section 230 as broadly as they have in the past.164 While Section 230 is alive and well, courts are gradually carving out more exceptions to its once-robust immunity.165

4. Attempts at Amendment and Repeal

In recent years, there has been widespread debate over whether Section 230 should be amended or repealed in its entirety, and several pieces of legislation have been introduced in Congress (though none have been enacted).166 Some of the different approaches include broadening carve-outs from immunity, clarifying one or more parts of the content governance process, or solely targeting illegal content.167

The SAFE TECH Act, introduced in 2021, would impose limits on Section 230 immunity for content involving civil rights violations, harassment, stalking, wrongful death, or violations of state and federal antitrust laws.168 It would also eliminate immunity for content promoted or amplified through paid advertising.169 The BAD ADS Act, introduced in 2020, would remove Section 230 protections for larger platforms and service providers (defined as those with more than 30 million U.S. users, or 300 million worldwide users, and more than $1.5 billion in annual revenue) that use behavioral targeting to serve advertisements.170 The PACT Act would require platforms “to publish their terms of service or use explaining what types of content are permissible,” while establishing a system for users to flag potentially illegal or violative content.171 Platforms would lose their immunity if they have actual knowledge of illegal content on their service and fail to remove it in a timely manner.172 The Online Freedom and Viewpoint Diversity Act takes a slightly different approach, stripping Section 230 immunity from platforms that lack an adequate justification for actions taken to moderate or restrict content.173

There has also been an attempt at executive action: In 2020, President Trump issued an executive order aimed at narrowing the scope of the immunity enjoyed by digital platforms under Section 230.174 The executive order declared that any platform that edited content, apart from restricting violent or obscene posts, was “engaged in editorial conduct” and may forfeit Section 230 immunity.175 The executive order targeted the “Good Samaritan” clause that protects platforms from liability for moderating objectionable third-party content in good faith.176 President Biden revoked the executive order the following year.177 Brendan Carr, who was appointed to run the Federal Communications Commission in the second Trump administration, has written about the need to “rein[] in Big Tech” and shared his desire to restrict Section 230 immunity.178

As courts attempt to apply Section 230 to modern technologies and lawmakers propose alternatives, much of the conversation has shifted toward the role of algorithms in amplifying content.179 These algorithms, which curate and promote posts based on user data, have come under scrutiny for their potential to spread harmful or misleading information.180 Understanding how these systems operate is essential to interpreting Section 230 and its broader impact on digital content governance.

B. Social Media Recommendation Systems

Recommendation systems are algorithms aimed at suggesting relevant items to users, such as movies to watch, products to buy, and social media posts to view.181 Recent advancements in technology have resulted in an “online data overload problem,” which “complicates the process of finding relevant and useful content over the internet.”182 Recommendation systems filter through this data and reduce the amount of effort and time a user spends looking for suitable information.183

1. An Introduction to Algorithms

Algorithms are “mathematical or logical process[es] consisting of a series of steps,” carefully crafted with specific objectives in mind.184 Recommendation algorithms on social media platforms determine how best to use content on the platform to maximize user engagement.185 Algorithms have enabled internet companies to sort, target, and deliver content at a scale that manual curation could never match.186 Incorporating machine learning, artificial intelligence, and feedback loops, algorithms have “generally operated with impunity.”187

Two major categories of recommender systems exist: collaborative filtering methods and content-based methods.188 In order to produce new recommendations, collaborative filtering methods rely solely on prior interactions recorded between users and items.189 These past user-item interactions are used to make predictions, and the more users interact, the more accurate the recommendations become.190 An example of this would be a movie recommendation system that suggests movies to a user, based on both similarity to movies the user has liked in the past and movies that similar users liked.191 The advantage to collaborative methods is that they require no information about users or items.192 The drawback, though, is that it can be difficult to recommend anything to new users or users who have few interactions.193
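To make the collaborative approach concrete, the following is a minimal Python sketch of user-based collaborative filtering; the ratings data, function names, and choice of cosine similarity are illustrative assumptions rather than any platform’s actual implementation.

```python
# Minimal sketch of user-based collaborative filtering: recommend movies to a
# target user from the ratings of similar users. All data is illustrative.
from math import sqrt

ratings = {  # user -> {movie: rating}
    "alice": {"Heat": 5, "Alien": 4, "Up": 1},
    "bob":   {"Heat": 5, "Alien": 5, "Jaws": 4},
    "carol": {"Up": 5, "Frozen": 4, "Heat": 1},
}

def cosine_similarity(a, b):
    """Similarity between two users, computed only over movies both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[m] * b[m] for m in shared)
    norm_a = sqrt(sum(a[m] ** 2 for m in shared))
    norm_b = sqrt(sum(b[m] ** 2 for m in shared))
    return dot / (norm_a * norm_b)

def recommend(user, k=2):
    """Score unseen movies by similarity-weighted ratings from other users."""
    scores = {}
    for other, other_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], other_ratings)
        for movie, rating in other_ratings.items():
            if movie not in ratings[user]:
                scores[movie] = scores.get(movie, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # ['Jaws', 'Frozen'] -- no item metadata needed
```

Notably, the sketch needs nothing beyond the interaction matrix itself, which is precisely why such methods struggle with new users who have no recorded interactions.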

Content-based methods rely on additional information about users and items, such as the age, sex, and occupation of the user, and the category, duration, and creator of the content.194 To generate recommendations, a model is created to explain the observed user-item interactions.195 An example of this would be an app store recommendation system suggesting a shopping app to a user because they recently installed a similar app.196 This approach demands in-depth knowledge of both the user and the content for an accurate recommendation.197 This information can be input by the developer (including through machine learning) or provided by the users themselves.198
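A comparably minimal sketch of a content-based recommender, again using hypothetical feature tags and sample data rather than any real platform’s catalog, might look like the following.

```python
# Minimal sketch of content-based filtering: suggest an app because its feature
# tags overlap with apps the user already installed. Data is illustrative.
catalog = {  # app -> set of descriptive features (category, attributes)
    "ShopFast": {"shopping", "deals", "mobile"},
    "DealHunt": {"shopping", "coupons"},
    "RunTrack": {"fitness", "gps"},
}

installed = {"ShopFast"}  # apps this user has already installed

def user_profile(installed_apps):
    """Build the user's interest profile from the features of installed apps."""
    profile = set()
    for app in installed_apps:
        profile |= catalog[app]
    return profile

def recommend(installed_apps):
    """Rank uninstalled apps by how many features they share with the profile."""
    profile = user_profile(installed_apps)
    candidates = [app for app in catalog if app not in installed_apps]
    return max(candidates, key=lambda app: len(catalog[app] & profile))

print(recommend(installed))  # DealHunt: shares the "shopping" feature
```

Here the quality of the recommendation depends entirely on how well the items and the user are described, which is why content-based methods demand richer information about both.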

2. Modern Applications

Recommendation systems allow social media companies to actively control what users view, “when they view it,” and with whom they interact on their platforms.199 Today, most platforms incorporate both content-based and collaborative filtering methods into their recommendation algorithms.200 However, each algorithm differs in scope,201 as some are more “assertively curatorial” than others.202

TikTok’s FYP allows TikTok users to “discover new content.”203 Unlike certain other algorithmically generated newsfeeds,204 TikTok’s algorithm “serv[es] content to users in which they have not explicitly expressed interest.”205 TikTok users allow the app to “generate content for them, as opposed to other platforms where a user must follow other users to see content.”206 While TikTok has stated that a user’s shares, likes, and follows each play a role in what the algorithm recommends, a Wall Street Journal study found that retention time (how long a user views a particular piece of content) is the most important factor for the algorithm, a sharp contrast with other platforms.207
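For illustration only, the sketch below shows how a feed-ranking score could weight retention far more heavily than likes, shares, or follows; the weights, field names, and signals are assumptions invented for this example and do not reflect TikTok’s actual formula.

```python
# Hypothetical sketch of a retention-weighted ranking score. The weights and
# signal names are assumptions for illustration, not any platform's real model.
from dataclasses import dataclass

@dataclass
class Signal:
    watch_fraction: float   # share of the video the user typically watches (0-1)
    liked: bool
    shared: bool
    follows_creator: bool

# Assumed weights: retention dominates every other engagement signal.
WEIGHTS = {"watch_fraction": 10.0, "liked": 1.0, "shared": 2.0, "follows_creator": 0.5}

def rank_score(s: Signal) -> float:
    """Combine engagement signals into a single score; watch time dominates."""
    return (WEIGHTS["watch_fraction"] * s.watch_fraction
            + WEIGHTS["liked"] * s.liked
            + WEIGHTS["shared"] * s.shared
            + WEIGHTS["follows_creator"] * s.follows_creator)

# A fully watched video from a stranger outranks a liked video the user skipped.
print(rank_score(Signal(0.95, False, False, False)))  # 9.5
print(rank_score(Signal(0.10, True, False, True)))    # 2.5
```

Under such a weighting, content from accounts the user has never followed can dominate the feed so long as it holds the user’s attention, which is what makes retention-driven feeds so “suggestive.”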

TikTok’s success has prompted competing media platforms, such as Instagram, Snapchat, and YouTube, to attempt to replicate TikTok’s algorithm.208 Instagram’s main feed is now powered by a complex algorithm showing the user content from across the platform, similar to TikTok’s FYP.209 One of the app’s features, Stories, allows users to share everyday moments “through photos and videos that disappear after 24 hours.”210 Like the main feed, Stories is powered by an algorithm; however, the Stories feed shows the user content only from accounts they follow.211 The Stories algorithm assesses the user’s general interest in a person’s post based on their interaction history, but it does not show the user stories from outside accounts.212 Instagram is thus an example of how one platform can rely on multiple algorithms, with some being more “curatorial” than others.213

Reddit’s content sorting algorithms are designed to enhance content discovery by incorporating machine learning.214 The site’s “Best” sorting option analyzes user activity—such as votes, subscriptions, posts, and comments—to predict and display content more aligned with a user’s individual preferences.215 When sorting by “Best,” users receive home feed recommendations, which highlight posts from new communities and provide context for why that particular piece of content was suggested.216 Reddit allows users to turn off home feed recommendations, limiting a user’s feed to communities they have joined.217

Media platforms rely on algorithms for more than just organizing content.218 Algorithms are used to provide users with friend recommendations to connect with others who may share their interests.219 Google has used algorithms to strengthen its search engine’s responses to users’ specific queries.220 It is difficult to overstate the prevalence of algorithms in modern consumer technology,221 and this ubiquity has significant implications for Section 230 jurisprudence.222

II. Analysis

A. The Use of Highly Suggestive Algorithms to Push and Promote Certain Types of Content Should Not Be Considered a Traditional Editorial Function

Not all recommendation algorithms are created equal.223 There is a lack of meaningful precedent closely examining the technical differences between algorithms and how they would impact the outcome of a Section 230 immunity claim.224 While some algorithms passively sort content based on chronological order or simple engagement metrics, others actively amplify, promote, or personalize content in ways that could lead to harm.225

The Third Circuit attempted to conduct such an analysis in Anderson, evaluating the technical aspects of TikTok’s algorithm and finding that its output was the company’s own first-party speech.226 In an amicus brief, the Electronic Frontier Foundation and other non-profit organizations argued that recommendations of TikTok videos, “whether implemented by algorithm or otherwise, reflect decisions about how to display third-party content and are thus ‘part of a publisher’s traditional editorial functions.’”227 The amici argued that online platforms’ recommendations of third-party content are protected by Section 230, comparing TikTok’s placement of videos on a user’s FYP to a newspaper’s decision to place an article on page A1.228 This analogy is flawed, since publishers (including newspapers) can still be found liable for incitement or negligent publication if a reader is seriously injured, dies, or suffers damage to their personal property after acting upon or using the content contained in the publication.229 There is no reason to think that the same standard should not apply to digital publishers.

The Ninth Circuit’s three-part test for when Section 230 immunity applies, first articulated in Barnes v. Yahoo!, Inc.,230 fails to account for this distinction.231 According to the Ninth Circuit, a defendant is entitled to Section 230 immunity when they are “(1) a provider or user of an interactive computer service (2) whom the plaintiff seeks to treat, under a state law cause of action, as a publisher or speaker (3) of information provided by another information content provider.”232 The Fourth Circuit, in its 1997 Zeran v. America Online, Inc. decision, barred lawsuits “seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone, or alter content.”233 The internet has, undoubtedly, evolved a great deal since Zeran was decided, yet the standard applied by circuit courts has remained static.234 The capabilities of AOL, such as sending and receiving electronic mail and communicating via bulletin board, are a far cry from the content recommendation algorithms used by social media platforms today,235 which are so highly suggestive that their use should not be likened to “traditional editorial function[s].”236

While the court in Anderson arrived at the correct outcome, some parts of the majority’s reasoning are problematic. Since the algorithm was “TikTok’s own expressive activity,” the court held that Section 230 did not bar Anderson’s claims.237 However, this leaves the door open for platforms seeking immunity under the First Amendment: “Because platforms organize users’ content into newsfeeds or other compilations, the argument goes, platforms engage in constitutionally protected speech.”238 The question of whether the First Amendment could provide immunity for platforms’ algorithmic output arose out of the Court’s decision in Moody.239 Assuming that the output does constitute speech, there are likely permissible grounds for the government to regulate it, because algorithmic output, as commercial speech, would be subject to a lower standard of review.240 However, such an approach would simply lead to more questions without meaningfully addressing the Section 230 problem.

Additionally, while the court briefly considered the inner workings of TikTok’s algorithm, it did not discuss whether its holding would apply to less suggestive algorithms—certain algorithms, such as TikTok’s, are much more suggestive than others.241 Features of a platform that may seem less suggestive, such as language translation, search, and notifications, are often powered by algorithms.242 Under Anderson, these features would be found to be the platform’s own “expressive activity,”243 even though they are fundamentally different from TikTok’s recommendations and closer to the type of activity that Section 230 was intended to protect.244 This would be problematic, and it would prevent Section 230 immunity from being applied in the vast majority of cases, even when warranted.

The approach advocated for by Chief Judge Katzmann is preferable because it leaves the door open for a technical evaluation of the specific algorithm.245 His analysis relied on “‘a careful exegesis’ of the statutory language.”246 In situations where algorithms create and communicate their own message, “that it thinks you, the reader—you, specifically—will like,” first-party speech is generated.247 This first-party speech sometimes invites users “to become part of a unique global community, the creation and maintenance of which goes far beyond . . . traditional editorial functions.”248 Judge Gould offered a similar framework, insisting that algorithms could not be considered “neutral” tools if they “knowingly amplifie[d] a message.”249 Even if the algorithm is neutral on its face, if it acts affirmatively to magnify and direct problematic content, the platform is not entitled to immunity.250 While platforms cannot be held liable for the content of third-party posts, they can be held liable for conduct that goes beyond simply hosting the content.251 When using the search feature, “users affirmatively seek specific information” and receive an algorithmically generated recommendation.252 Feed-based social media displays content not affirmatively sought by the user, and in the case of TikTok’s FYP, recommends posts from others that the algorithm thinks the user will enjoy.253 Under Chief Judge Katzmann’s approach, the output of highly curated feeds such as the FYP would be found to be first-party speech and exempt from Section 230 protection, while more neutral algorithmic applications would still be eligible for immunity.

B. Arguments in Support of an Expansive Interpretation of Section 230 Are Misguided

Those who favor a broad interpretation of Section 230 believe that limiting immunity would fundamentally alter the internet.254 One author argues that the Third Circuit’s decision in Anderson is a hard shove towards “the end of . . . the internet as we know it.”255 Others argue that the loss of immunity “would likely threaten current tech company operations and have a chilling effect on innovation.”256 They insist that algorithmic recommendations are necessary for providers to “select and organize content that users will find relevant and engaging.”257

It is true that the amount of content uploaded to social media platforms is increasing at a staggering rate; for example, estimates suggest that 2.45 billion pieces of content are shared on Facebook each day.258 Social media platforms have become dependent on algorithmic recommendation to deliver relevant content to users.259 However, there are ways to sort and deliver content that do not require the use of suggestive algorithms. In its infancy, Facebook presented content in reverse chronological order.260 While Facebook later developed an algorithm to rank content and determine the order in which status updates should be displayed, it was not initially suggestive and did not convey a message.261

Today, the sequence of posts a user sees on most social media platforms is determined by an algorithm.262 However, many platforms still provide users with the ability to sort their feeds in chronological order, even if it is not the default setting.263 Instagram, Facebook, and YouTube make it possible for their users to see the latest content from those they follow or subscribe to without interference from the algorithm; the change is not permanent, however, and algorithmically targeted advertisements are still interspersed among the posts.264

Even with the large amount of content shared online, chronological ordering remains a viable option to sort and deliver content, and it allows the user to meaningfully interact with the platform.265 The argument that suggestive content recommendation systems are “all but necessary” to sort content is unpersuasive.266 There is no reason why a user’s feed cannot be limited to accounts that the user follows, which would effectively curate the large volume of content uploaded to the platform. Companies are instead interested in using algorithms because they drive engagement and increase the amount of time users spend on the platform, thus raising advertising revenues.267 This justification does not warrant broad immunity under Section 230. The statute’s original intention was to “protect computer Good Samaritans . . . who . . . screen indecency and offensive material for their customers.”268 Those involved in its passage hoped that it would deliver “rich and diverse informational, educational, [and] cultural resources.”269 Concerns about retaining a user base and collecting revenue for advertising were not motivating factors behind its enactment.270
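By way of comparison, the follow-only, reverse-chronological feed described above can be expressed in a few lines of Python; the post fields and sample data are illustrative.

```python
# Minimal sketch of the alternative described above: a feed limited to accounts
# the user follows, sorted newest first, with no engagement prediction at all.
from datetime import datetime

posts = [
    {"author": "newsdesk", "time": datetime(2025, 1, 2, 9, 0),  "text": "Morning update"},
    {"author": "stranger", "time": datetime(2025, 1, 2, 9, 5),  "text": "Viral challenge"},
    {"author": "friend",   "time": datetime(2025, 1, 1, 18, 0), "text": "Dinner photo"},
]

def build_feed(all_posts, following):
    """Keep only posts from followed accounts; order them newest first."""
    visible = [p for p in all_posts if p["author"] in following]
    return sorted(visible, key=lambda p: p["time"], reverse=True)

for post in build_feed(posts, following={"newsdesk", "friend"}):
    print(post["time"], post["author"], post["text"])
```

Nothing in this ordering predicts engagement or conveys a message about what the user should see next; it simply filters and sorts.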

III. Proposal

To determine whether a social media platform is a “publisher,” and to assess the second prong of the Ninth Circuit’s immunity test, courts should conduct a fact-specific inquiry, guided by Chief Judge Katzmann’s and Judge Gould’s analyses. This would consider how the content was shown to the user: In cases where the user searched for the content, or affirmatively sought it out, immunity may be warranted. But in situations where a user was shown content that they did not request, where an algorithm created and conveyed its own message, companies should be held liable for their own speech.271 Recommendation algorithms that are facially “neutral,” in the context of Roommates.com, should not face this level of scrutiny.272

From a textualist perspective, this is the proper approach for applying Section 230 to recommendation algorithms. First, there is little evidence that Congress enacted Section 230 intending to protect modern social media algorithms.273 According to the material contribution standard, a platform receives no immunity if it plays a large role in “developing, as opposed to merely publishing, content that leads to illegal activity.”274 Applying this standard to recommendation algorithms, platforms should not receive protection since hyper-personalized content curation is more akin to creation and development than passive publication.275 Further, Section 230(a) states that interactive computer services “offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.”276 This could not be further from reality, as algorithmic recommendations give users less control over the information they see.277 The statute indicates that Congress never intended immunity to extend to such tools.278

It is the proper approach from a policy perspective, as well. In recent years, many researchers and public health officials have raised the alarm about the impact of digital media platforms on individual and societal welfare.279 Social media platforms have leveraged human attention spans to maximize profit.280 Studies have shown that these platforms are extremely harmful when consumed in excess.281 Social media companies should not be able to take advantage of an outdated statute to enjoy broad immunity for what takes place on their platforms. Major internet platforms’ use of algorithms harms the American public through “defamation of individuals, groups, and institutions; dissemination of hate speech; and incitements to violence.”282 Some expansion of platform liability would likely reduce these negative externalities.283

One of the less often discussed options for internet immunity reform is for Congress to delegate its policy-making authority to an administrative agency.284 This approach has several advantages: delegating authority to an administrative agency “allows the law to evolve in response to changing circumstances,” allows Congress to “take advantage of agency employees’ expertise”––since almost no legislators hold technical degrees––and requires a congressional majority at only one point in time.285 However, uncertainty over the authority of federal agencies makes this option less appealing.286 Additionally, delegating to administrative agencies could result in drastic policy shifts every four years, depending on which party controls the White House. Internet companies may struggle to remain compliant with the law, making this a worse option than a traditional amendment.

Several considerations should guide what an amended Section 230 looks like. First, there are political concerns: Democrats want to hold platforms accountable for spreading misinformation, while Republicans fear censorship due to political bias.287 Second, supporters of the statute as currently written want to ensure the protection of a free and open internet, with a “vibrant social networking environment.”288 Finally, civil rights groups, such as the American Civil Liberties Union (“ACLU”), want to protect free speech.289 While these concerns are legitimate, the priority should be reining in platforms causing societal harm through addictive and highly suggestive recommendation systems.

It is crucial that the statute be updated to reflect the modern social media landscape. Any proposal should effectively reduce a platform’s discretion to selectively bar or promote certain speech through the use of algorithms.290 The Biased Algorithm Deterrence Act, introduced in 2019, provides that an owner or operator of a social media service shall be treated as a publisher or speaker of user-generated content (and, therefore, may be held liable) if the service or its algorithm “displays user-generated content in an order that is not chronological.”291 While this would be a step in the right direction, the bill goes too far. Under this definition, a platform whose search function responds to a user’s query with the most relevant, rather than the most recent, information could be held liable for the content it retrieved. A stronger bill would provide that the operator of a social media service be treated as a publisher or speaker of user-generated content when the service presents content to the user that they did not actively seek out (through search, for example) and that was not posted by an account followed by the user. The ideal legislative reform would specifically prevent companies from receiving immunity for any third-party content delivered through an intricately curated newsfeed such as TikTok’s FYP. This would discourage platforms from implementing extremely addictive algorithms that curate content from across the entire platform, while protecting less suggestive functions that also rely on algorithms.
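Expressed schematically, the proposed standard reduces to a simple decision rule, sketched below in Python; the field names are illustrative shorthand, not statutory text.

```python
# Schematic sketch of the standard proposed above: a platform is treated as a
# publisher of a recommended item only when the user neither searched for the
# content nor follows the account that posted it. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Delivery:
    surfaced_via_search: bool   # user affirmatively queried for this content
    author_followed: bool       # user follows the account that posted it

def treated_as_publisher(d: Delivery) -> bool:
    """True if, under the proposal, Section 230 immunity would not apply."""
    return not d.surfaced_via_search and not d.author_followed

print(treated_as_publisher(Delivery(surfaced_via_search=False, author_followed=False)))  # True: FYP-style push
print(treated_as_publisher(Delivery(surfaced_via_search=True,  author_followed=False)))  # False: user sought it out
```

Content surfaced through search or posted by a followed account would remain within the safe harbor, while unsolicited, algorithmically pushed content would be treated as the platform’s own speech.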

Conclusion

Designed to facilitate the growth of the internet and shield internet service providers of the 1990s from liability for third-party content, Section 230 of the Communications Decency Act is insufficiently nuanced to address the realities of modern, algorithmically driven social media. Its scope has expanded far beyond its original intent, and circuit courts have inconsistently applied the statute, with recent cases suggesting a shift away from the broad immunity once enjoyed by internet companies.292 Courts should narrow the scope of immunity in cases where platforms employ algorithms that actively shape and promote content. Platforms should be held accountable when their own expressive activity results in the distribution of harmful content. The best interpretation of Section 230 demands as much, and Congress should amend the statute to make that even clearer.


* Notes Editor, Cardozo Law Review (Vol. 47); J.D. Candidate, Benjamin N. Cardozo School of Law (2026); B.A., Binghamton University (2022). I would like to thank my Note Advisor, Professor Michael Pollack, for his thoughtful feedback and guidance throughout the writing process. I am also grateful to Elizabeth Bulat and Julia Ferro for their hard work preparing this Note for publication.