
Commonwealth v. Meta Platforms Inc - Section 230 Immunity Ruling

Source: Massachusetts SJC New Opinions

Summary

The Massachusetts Supreme Judicial Court ruled that Section 230 immunity does not bar Commonwealth claims against Meta Platforms and Instagram alleging unfair business practices (designing platform to induce compulsive use by children), deceptive practices (misleading public about platform safety), and public nuisance. The court held that § 230(c)(1) protects interactive computer service providers only from claims seeking to hold them liable for harms stemming from user-generated content, not from claims based on the company's own conduct in designing and marketing the platform.

What changed

The SJC held that Meta cannot invoke § 230 immunity to dismiss claims alleging the company designed its platform to create compulsive use among minors and made misleading representations about platform safety. The court drew a distinction between claims about user-generated content (protected) and claims about the company's own affirmative conduct in platform design and safety representations (not protected). The 4-3 decision permits Meta's interlocutory appeal under the doctrine of present execution but rejects the § 230 defense, allowing the Commonwealth's unfair practices, deceptive practices, and public nuisance claims to proceed in the lower court.

For technology companies and social media platforms, this ruling signals that Section 230 may not provide blanket immunity for platform design choices that target or affect minors, even when the underlying harm involves user interactions. State enforcers pursuing similar claims in other jurisdictions may cite this decision to argue that those claims fall outside § 230 protection. The ruling does not decide the merits of the Commonwealth's claims but clears the path for discovery and trial proceedings on the design and misrepresentation allegations.

What to do next

  1. Monitor developments in Section 230 immunity doctrine
  2. Assess whether platform design claims remain viable in other jurisdictions

Archived snapshot

Apr 11, 2026

GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.

Instagram, LLC.[1]

Jay M. Wolman for International Center for Law & Economics. Rob Bonta, California Attorney General, Nicklas A. Akers, Assistant California Attorney General, Bernard A. Eskandari, Megan M. O'Neill, & Marissa Roy, Deputy California Attorneys General, Philip J. Weiser, Colorado Attorney General, Shannon Stevenson, Colorado Solicitor General, Krista Batchelder, Deputy Colorado Solicitor General, Danny Rheiner, Assistant Colorado Solicitor General, Kris Mayes, Arizona Attorney General, Reagan Healey, Assistant Arizona Attorney General, William Tong, Connecticut Attorney General, Kathleen Jennings, Delaware Attorney General, Marion M. Quirk, of Delaware, Ryan T. Costa, of Delaware, Brian L. Schwalb, District of Columbia Attorney General, Anne E. Lopez, Hawai'i Attorney General, Theodore E. Rokita, Indiana Attorney General, Russell Coleman, Kentucky Attorney General, Aaron M. Frey, Maine Attorney General, Anthony Brown, Maryland Attorney General, Philip D. Ziperman, of Maryland, Elizabeth J. Stern, Assistant Maryland Attorney General, Keith Ellison, Minnesota Attorney General, Evan Romanoff, Assistant Minnesota Attorney General, Michael T. Hilgers, Nebraska Attorney General, John M. Formella, New Hampshire Attorney General, Brandon H. Garod, Assistant New Hampshire Attorney General, Matthew J. Platkin, New Jersey Attorney General, Letitia James, New York Attorney General, David W. Sunday, Jr., Pennsylvania Attorney General, Jonathan R. Burns, Deputy Pennsylvania Attorney General, Lourdes Gómez Torres, Puerto Rico Secretary of Justice, Peter F. Neronha, Rhode Island Attorney General, Alan Wilson, South Carolina Attorney General, Jared Q. Libet, Assistant Deputy South Carolina Attorney General, Marty J. Jackley, South Dakota Attorney General, Amanda Miiller, Deputy South Dakota Attorney General, Charity R. Clark, Vermont Attorney General, Nicholas W. Brown, Washington Attorney General, John B. McCuskey, West Virginia Attorney General, Laurel K. Lackey & Abby G. Cunningham, Assistant West Virginia Attorneys General, & Josh Kaul, Wisconsin Attorney General, for California Attorney General & others. Steven P. Lehotsky for NetChoice & another. Caitriona Fitzgerald for Electronic Privacy Information Center & others. Ari Z. Cohn, of Illinois, & John G. Mateus for Foundation for Individual Rights and Expression. Jay M. Wolman for Jane Bambauer & another. Blake C. Stacey, pro se.

WENDLANDT, J. The Commonwealth alleges that Meta Platforms, Inc., and Instagram, LLC (collectively, Meta),

engaged in unfair business practices by designing the Instagram platform to induce compulsive use by children, engaged in deceptive business practices by deliberately misleading the public about the safety of the platform, and created a public nuisance by engaging in these unfair and deceptive practices. Meta moved to dismiss the complaint, arguing, inter alia, that § 230 of the Communications Decency Act of 1996 (CDA), 47 U.S.C. § 230 (§ 230), barred the claims. A Superior Court judge denied the motion, and Meta appealed. This case first presents the question whether the doctrine of present execution permits an interlocutory appeal from a Superior Court judge's order denying a motion to dismiss based on a defense under § 230. Concluding that it does, we reach the question whether § 230 bars the Commonwealth's claims. We conclude that it does not. Section 230(c)(1) protects an interactive computer service provider, such as Meta, against claims that "treat[] [it] as the publisher . . . of any information provided by" someone other than Meta. 47 U.S.C. § 230(c)(1). Consistent with the text of the statute, common-law principles of publisher liability, and legislative purpose, we determine that § 230(c)(1) protects an interactive computer service provider against claims that seek to hold it liable for harms stemming from user-generated content it published. Here, accepting as true the allegations of the

complaint and drawing all reasonable inferences in the Commonwealth's favor, the claims do not seek to impose liability on Meta for information provided by third parties. Instead, the claims allege harm stemming from Meta's own conduct either by designing a social media platform that capitalizes on the developmental vulnerabilities of children or by affirmatively misleading consumers about the safety of the Instagram platform. Thus, at least at this preliminary stage of the litigation, Meta has not shown it is entitled to the protection provided by § 230(c)(1).[2]

  1. Background.[3] Meta owns and operates the social media platform Instagram, which is used by over 33 million young people, including over 300,000 daily active users in the Commonwealth from the age of thirteen to seventeen. Instagram enables users to post images and videos and interact with other

[2] We acknowledge the amicus briefs submitted by NetChoice and the Chamber of Progress; twenty-five State and territorial Attorneys General; the Foundation for Individual Rights and Expression; the Washington Legal Foundation; Professors Jane Bambauer and Eugene Volokh; the Electronic Privacy Information Center, Common Sense Media, Cybersecurity for Democracy, the Tech Justice Law Project, and legal scholars; TechFreedom; the International Center for Law & Economics; and Blake C. Stacey.

[3] We set forth the allegations in the complaint. At the motion to dismiss stage, we take all the allegations as true, drawing every reasonable inference in favor of the plaintiff -- here, the Commonwealth. See Hornibrook v. Richard, 488 Mass. 74, 78 (2021).

users. Users view content through various means, including on the "main feed," which consists of a continuous stream of posts from accounts the user "follows,"[4] suggested posts from accounts the user does not follow, and advertisements; the "explore" page, which consists of a continuous stream of content that Meta predicts might be interesting to the user, including advertising content; and the "stories" banner at the top of the home page, which consists of time-limited images and videos posted by accounts that the user follows mixed with advertisements. Meta derives substantially all its revenue from selling advertising opportunities to third parties seeking to target promotional material to likely consumers employing the data Meta collects about users' preferences. Third parties pay Meta per "advertisement impression," which is the number of times an advertisement is on screen for a target audience. This revenue model creates a financial incentive for Meta to increase users' screen time because the more time a user spends on the platform, the more advertisement impressions Meta can sell. As described in greater detail infra, the Commonwealth alleges that in pursuit of maximizing advertising profits, Meta has designed Instagram to induce compulsive use among young

[4] To "follow" an account on Instagram means "to subscribe to the feed of (someone or something)." Merriam-Webster Online Dictionary, https://www.merriam-webster.com/dictionary/follow [https://perma.cc/82LH-MKGM].

users to the detriment of their health and well-being, violating G. L. c. 93A, § 2 (count I); knowingly deceived the public about the risks of Instagram, depriving the public of the ability to make an informed choice about individual use of the platform in violation of G. L. c. 93A, § 2 (count II); unfairly and deceptively claimed that it excludes users under the age of thirteen who are acutely harmed by Instagram's addictive features in violation of G. L. c. 93A, § 2 (count III); and created a public nuisance of youth addiction to Instagram through the unfair and deceptive business conduct alleged in counts I to III (count IV).

  a. Meta's unfair business practices (count I). In count I, the Commonwealth alleges that to increase young users' time on Instagram, Meta unfairly employs a suite of design features that exploit those users' neurological vulnerability to social media addiction in violation of G. L. c. 93A, § 2. These features induce compulsive and excessive platform use by strategically triggering dopamine releases,[5] targeting young users' uniquely sensitive "fear of missing out" (FOMO), and exploiting teens' "novelty seeking" minds. This addiction, in

[5] "Dopamine is a neurotransmitter made in [the] brain" that "plays a role as a 'reward center' and in many body functions, including memory, movement, motivation, mood, [and] attention." Cleveland Clinic, https://my.clevelandclinic.org/health/articles/22581-dopamine [https://perma.cc/VN2J-VMU3].

turn, impairs young users' health and well-being by, inter alia, disrupting sleep patterns, stunting socioemotional development, and causing mental health challenges. The complaint specifically identifies four design features that harm young users.

  i. High volume of notifications. Instagram is designed to send a high volume of audiovisual and haptic notifications that alert users to activity on Instagram even when they are not using the platform. Under the default settings, Meta enables approximately forty types of notifications, including notifications that a third party has "liked"[6] the user's posts, requested to follow the user, sent the user a message, added a new "story" (a temporarily viewable post), or uploaded a "reel" (a video post). This barrage of notifications, according to the Commonwealth, is designed to overwhelm young users and compel them repeatedly to reopen Instagram to the detriment of their health and well-being.

  ii. Infinite scroll and autoplay. Instagram's "infinite scroll" feature delivers a never-ending stream of posts and advertisements to a user's main feed and explore pages, while

[6] In the social media context, to "like" means "to electronically register one's approval of (something, such as an online post or comment) for others to see (as by clicking on an icon designated for that purpose)." Merriam-Webster Online Dictionary, https://www.merriam-webster.com/dictionary/like [https://perma.cc/33FU-U55E].

the "autoplay" feature displays stories and reels in a continuous slideshow format that automatically starts playing the next story or reel as the previous one ends. The Commonwealth maintains that Meta has designed these features to encourage excessive screen time among young users by ensuring an endless flow of content without requiring any affirmative act on the part of a user.

  iii. Ephemeral features. Meta designed certain ephemeral features, which render some posts viewable for a limited amount of time before vanishing, to recall users to Instagram and extend their platform usage. Specifically, stories are only available to view for twenty-four hours before disappearing from a user's feed, and Instagram "live" streaming videos are only available to view in real time. The Commonwealth contends that Meta is aware that these features induce FOMO among young users and result in problematic and habitual use, harming youth health.

  iv. Intermittent variable rewards. The Commonwealth asserts that Meta induces young users to prolong their time on the Instagram platform and frequently reopen the platform by deploying two intermittent variable reward (IVR) features, which provide users with a dopamine surge at unexpected intervals. First, Meta delivers notifications on an IVR schedule; specifically, Meta delivers dopamine-generating notifications

such as "someone 'liked' your post" (the reward) on a variable schedule alongside neutral notifications. Because IVRs are unpredictable and users desire the psychologically pleasing dopamine release of a rewarding notification, this IVR schedule keeps young users returning to the platform habitually. Second, the refresh function on a user's main feed is an IVR feature akin to a slot machine as it links a user's act -- a swipe -- with a variable reward -- a new post which the user cannot predict. Meta has heightened the anticipation of this type of reward by deliberately manufacturing a delay between the moment the user swipes to refresh his or her feed and the display of new content.

  b. Meta's deceptive business practices (count II). In count II, the Commonwealth alleges that Meta deceptively represented to the public that its platform is safe and nonaddictive, and that the company prioritizes users' well-being. At the same time that Meta was publicly assuring lawmakers, investors, users, and families that Instagram is safe for youth, however, it was prioritizing features that it knew from company studies induced social media addiction and harmed young users' health. The complaint identifies specific public statements by Meta executives, including Meta's chief executive officer, Meta's global head of safety, and the head of

Instagram, as deliberately misleading and belied by internal data and communications.

  c. Meta's deceptive and unfair business practices (count III). In count III, the Commonwealth raises both a deceptive and unfair business practices claim pursuant to G. L. c. 93A, § 2, pertaining to users less than thirteen years old (underage users). First, it alleges that Meta engaged in deceptive business practices by publicly stating that it excludes underage users from accessing the platform (age-gating) despite knowing that hundreds of thousands of underage users possess Instagram accounts and that Instagram's age-gating measures are ineffective. Second, it alleges that Meta engaged in unfair business practices by refusing to invest in effective age-gating mechanisms all the while knowing that Instagram's addiction-inducing features, discussed supra in connection with count I, cause acute harm to underage users.

  d. Public nuisance (count IV). Count IV alleges that by engaging in the unfair and deceptive practices alleged in counts I to III, Meta has "knowingly created, substantially contributed to, and/or [was a] substantial participant[] in maintaining a public nuisance of youth addiction" to Instagram, which has significantly interfered with the public health and safety of youth who spend excessive amounts of time on the platform.

  e. Motion to dismiss. Meta moved to dismiss the complaint under Mass. R. Civ. P. 12 (b) (6), 365 Mass. 754 (1974), arguing, inter alia, that the claims are barred by § 230(c)(1). A Superior Court judge denied Meta's motion; relevant to the present appeal, the judge concluded that § 230(c)(1) did not bar the claims. The judge reasoned that the unfair business practices claims in counts I and III seek to hold Meta liable for its own business conduct as a product designer, not for the information provided by third parties on the platform; the harm alleged in the complaint flows from the design features themselves, not from any specific third-party content or from design choices that contributed to the development or creation of any harmful third-party content. The judge also determined that § 230(c)(1) does not provide immunity because the deception claims in counts II and III are based on Meta's own speech. The judge did not address directly the applicability of CDA immunity to count IV but implicitly concluded that it did not bar the public nuisance claim in denying Meta's motion to dismiss. Meta filed a timely notice of appeal, contending that its appeal was proper under the doctrine of present execution on the ground that § 230(c)(1) provides Meta immunity from suit. We accepted[7]

[7] Meta also petitioned a single justice of the Appeals Court for leave to bring an interlocutory appeal from the Superior Court's ruling pursuant to G. L. c. 231, § 118; Meta sought leave to appeal all the issues it raised in its motion to

Meta's application for direct appellate review, limited to the issue whether Meta's claimed protection under § 230 is immediately appealable pursuant to the doctrine of present execution and, if so, whether the Commonwealth's claims are barred by § 230.

  2. Statutory framework. Enacted as part of broader reforms set forth in the Telecommunications Act of 1996, Pub. L. No. 104-104, 110 Stat. 56 (1996), § 230 evinces Congress's intent "to promote the free exchange of information and ideas over the Internet." Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1122 (9th Cir. 2003). Congress codified this intent in the statute itself, setting forth a Federal policy "to promote the continued development of the Internet and other interactive computer services and other interactive media," 47 U.S.C. § 230(b)(1), and "to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation," 47 U.S.C. § 230(b)(2). As discussed in greater detail infra, Congress was particularly concerned that interactive computer service providers, faced with the specter

dismiss. We transferred that petition to this court pursuant to our authority under G. L. c. 211, § 4A, consolidated it with the pending appeal, and denied Meta's request for leave to appeal issues other than the question whether the doctrine of present execution permits an interlocutory appeal from an order denying a motion to dismiss based on a defense under § 230.

of tort liability stemming from information provided by millions of third-party users, would limit speech on their platforms. See Zeran v. America Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997), cert. denied, 524 U.S. 937 (1998) ("Congress recognized the threat that tort-based lawsuits pose to freedom of speech" and enacted § 230, "in part, to maintain the robust nature of Internet communication and, accordingly, to keep government interference in the medium to a minimum").[8] Section 230 also evidences Congress's intent "to encourage voluntary monitoring for offensive or obscene material." Carafano, 339 F.3d at 1122. Congress codified this intent, setting forth a Federal policy "to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services" and to "remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children's access to objectionable or inappropriate online material." 47 U.S.C. § 230(b)(3)-(4). The title of § 230 -- "Protection for private blocking and screening of offensive

[8] See Zeran, 129 F.3d at 331 (Congress chose to immunize providers in part because "[f]aced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted").

material" -- similarly alludes to the goal of encouraging voluntary content moderation. As discussed infra, Congress was concerned that in the absence of such protections, service providers would cease their own efforts to create tools to remove inappropriate content in order to avoid liability. Put differently, "[i]n passing [§] 230, Congress sought to spare interactive computer services this grim choice [between doing nothing to curb harmful content or facing liability for their efforts] by allowing them to perform some editing on user-generated content without thereby becoming liable for all defamatory or otherwise unlawful messages that they didn't edit or delete."[9] Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1163 (9th Cir. 2008). Further, seeking to empower and protect individual users, Congress expressly affirmed its commitment to curb "obscenity, stalking, and harassment by means of computer" through enforcement of criminal laws to punish directly those who engage in such unlawful activities online. 47 U.S.C. § 230(b)(5). In short, "Congress made a policy choice . . . not to deter harmful online speech through the separate route of imposing tort liability on companies that serve as intermediaries for other

[9] See Zeran, 129 F.3d at 331 ("Fearing that the specter of liability would therefore deter service providers from blocking and screening offensive material, Congress enacted § 230[] . . .").

parties' potentially injurious messages." Zeran, 129 F.3d at 330-331. To effect these aims, § 230 protects certain conduct by interactive computer service providers. First, "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." 47 U.S.C. § 230(c)(1). Second, "[n]o provider or user of an interactive computer service shall be held liable on account" of certain good faith efforts to restrict access to objectionable material. 47 U.S.C. § 230(c)(2). At the same time, § 230 imposes certain affirmative obligations on interactive computer service providers, including a requirement that they notify their users about commercially available parental control protections. See 47 U.S.C. § 230(d). Section 230 also specifically addresses the effect of the statute on State law claims. It provides: "Nothing in this section shall be construed to prevent any State from enforcing any State law that is consistent with this section. No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section." 47 U.S.C. § 230(e)(3). Together these provisions protect interactive computer service providers against claims that seek to hold them liable for certain decisions to disseminate or restrict information on their platforms.

  3. Doctrine of present execution. With this statutory framework in mind, we turn to the question whether the doctrine of present execution permits Meta's interlocutory appeal from the order denying its motion to dismiss insofar as it concerned Meta's affirmative defense under § 230(c)(1). We address this legal question de novo. See CP 200 State, LLC v. CIEE, Inc., 488 Mass. 847, 848 (2022). "Generally, a litigant is entitled to appellate review only of a final judgment, not of an interlocutory ruling . . . ." Lynch v. Crawford, 483 Mass. 631, 634 (2019). "However, in narrowly limited circumstances, where an interlocutory order will interfere with rights in a way that cannot be remedied on appeal from a final judgment, and where the order is collateral to the underlying dispute in the case and therefore will not be decided at trial, a party may obtain full appellate review of an interlocutory order under our doctrine of present execution" (quotations and citation omitted). Patel v. Martin, 481 Mass. 29, 32 (2018). "The doctrine is intended to be invoked narrowly to avoid piecemeal appeals from interlocutory decisions that will delay the resolution of the trial court case, increase the over-all cost of the litigation, and burden our appellate courts." Id. In civil cases, one of the limited circumstances in which the doctrine of present execution applies is where a party

claims a statutory right to immunity from suit. See Estate of Moulton v. Puopolo, 467 Mass. 478, 485 (2014) (present execution doctrine applies where "protection from the burden of litigation and trial is precisely the right to which [a party] asserts an entitlement"). We allow an interlocutory appeal in such a circumstance because "[w]here a party claims immunity from suit but does not prevail on a motion to dismiss or for summary judgment, the party cannot completely vindicate his or her rights on appeal from a final judgment"; "the party would already then have defended the case at trial -- exactly what immunity from suit was 'designed to prevent.'" Lynch, 483 Mass. at 634, quoting Patel, 481 Mass. at 33. See Kent v. Commonwealth, 437 Mass. 312, 316 (2002) ("right to immunity from suit is effectively lost as litigation proceeds past motion practice and is not adequately vindicated if an order denying it were not appealable until the close of litigation" [quotations and citations omitted]). By contrast, when a statute confers immunity from liability, the present execution doctrine does not apply, as the right may be vindicated fully on appeal after trial. See Lynch, supra at 634-635. In determining whether statutory immunity is from suit altogether or a more limited immunity from liability, we construe the statute

"according to the intent of the Legislature ascertained from all its words construed by the ordinary and approved usage of the language, considered in connection with the cause of its enactment, the mischief or imperfection to be remedied and the main object to be accomplished, to the end that the purpose of its framers may be effectuated." Patel v. 7-Eleven, Inc., 489 Mass. 356, 362-363 (2022), S.C., 494 Mass. 562 (2024), quoting Harvard Crimson, Inc. v. President & Fellows of Harvard College, 445 Mass. 745, 749 (2006). As discussed supra, § 230(e)(3) states that "no cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section" (emphases added). 47 U.S.C. § 230(e)(3). The plain meaning of "no cause of action may be brought" is that a suit may not be initiated in the first instance and the defendant cannot be forced to litigate the claim. See Black's Law Dictionary 275 (12th ed. 2024) (defining "cause of action" as "a factual situation that entitles one person to obtain a remedy in court from another person"). This construction is bolstered by the language in § 230(e)(3) "and no liability may be imposed," which plainly confers immunity from liability as an additional protection, separate from the immunity conferred by the phrase "no cause of action may be brought." See Commonwealth v. Fleury, 489 Mass. 421, 427 (2022) (observing "the fundamental and long-standing principle of statutory interpretation that we must strive to give effect to each word of a statute so that no

part will be inoperative or superfluous" [quotation and citation omitted]).[10] Thus, we join those courts[11] that have concluded that § 230(e)(3) provides an interactive computer service provider immunity from suit against a State law claim that conflicts with the Federal statute.[12] Accordingly, the doctrine

[10] For this reason, we reject the Commonwealth's construction of the phrase "no cause of action may be brought" in § 230(e)(3) as simply restating the immunity from liability conferred in the phrase "and no liability may be imposed." See A. Scalia & B.A. Garner, Reading Law: The Interpretation of Legal Texts 174 (2012) ("If possible, every word and every provision is to be given effect . . . . None should be ignored. None should needlessly be given an interpretation that causes it to duplicate another provision or to have no consequence").

[11] See Nemet Chevrolet, Ltd. v. Consumeraffairs.com, Inc., 591 F.3d 250, 255 (4th Cir. 2009), quoting Roommates.com, LLC, 521 F.3d at 1175 ("We thus aim to resolve the question of § 230 immunity at the earliest possible stage of the case because that immunity protects websites not only from 'ultimate liability,' but also from 'having to fight costly and protracted legal battles'"); Carafano, 339 F.3d at 1125 ("Congress intended that service providers . . . be afforded immunity from suit"); Zeran, 129 F.3d at 330 ("By its plain language, § 230 creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service"); Hassell v. Bird, 5 Cal. 5th 522, 544 (2018), cert. denied, 586 U.S. 1126 (2019) (plain language of § 230 "conveys an intent to shield Internet intermediaries from the burdens associated with defending against state-law claims that treat them as the publisher or speaker of third party content"); Banaian v. Bascom, 175 N.H. 151, 158 (2022) ("plain language [of § 230(e)(3)] confers immunity from suit"); In re Facebook, Inc., 625 S.W.3d 80, 87 (Tex. 2021), cert. denied sub nom. Doe v. Facebook, Inc., 142 S. Ct. 1087 (2022) (plain language of § 230 "create[s] a substantive right to be free of litigation, not just a right to be free of liability at the end of litigation").

[12] We decline to follow General Steel Dom. Sales, L.L.C. v. Chumley, 840 F.3d 1178, 1182 (10th Cir. 2016), which addressed

of present execution permits Meta's appeal as it pertains to Meta's defense under § 230.[13]

  4. Section 230(c)(1). Having concluded that the appeal is properly before us, we next address Meta's contention that § 230(c)(1) bars the Commonwealth's claims, an issue of statutory construction that we review de novo. See Garcia v. Steele, 492 Mass. 322, 326 (2023). Section 230(c)(1) provides: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." 47 U.S.C. § 230(c)(1).

To fall within the protection of § 230(c)(1), a defendant thus must show that it is a provider or user of an "interactive computer service";[14] that the claim would treat the defendant as

whether interlocutory review was available under the Federal collateral order doctrine. The court stated, without explanation, that § 230(e)(3) "does not contain an explicit bar to suit" and is "merely a preemption provision." Id.

[13] In view of the interlocutory nature of the present appeal and consistent with the limited scope of the doctrine of present execution, we decline to reach Meta's additional bases for dismissal of the complaint. See CP 200 State, LLC, 488 Mass. at 849 ("The doctrine of present execution is a long-standing exception to [the rule that there is no right to appeal from an interlocutory order unless a statute or rule authorizes it], applicable in limited circumstances").

[14] "The term 'interactive computer service' means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that

the publisher or speaker of information; and that the information was provided by another "information content provider."15 See Calise v. Meta Platforms, Inc., 103 F.4th 732, 738 (9th Cir. 2024), citing Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1100-1101 (9th Cir. 2009). See also Massachusetts Port Auth. v. Turo Inc., 487 Mass. 235, 240 (2021) (Turo), quoting Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 19 (1st Cir. 2016), cert. denied, 580 U.S. 1083 (2017) (Backpage.com) (§ 230[c][1] immunity "applies when 'the defendant [1] is a provider or user of an interactive computer service; [2] the claim is based on information provided by another information content provider; and [3] the claim would treat the defendant as the publisher or speaker of that information'"). Courts have construed § 230(c)(1) "to establish broad federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service." Turo, 487 Mass. at 240, quoting Perfect 10, Inc. v. CCBill LLC, 488 F.3d 1102, 1118 (9th Cir.), cert.

provides access to the Internet and such systems operated or services offered by libraries or educational institutions." 47 U.S.C. § 230(f)(2).

15 "The term 'information content provider' means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service." 47 U.S.C. § 230(f)(3).

denied, 552 U.S. 1062 (2007). Immunity thus does not depend on the form of the asserted cause of action; regardless of how a claim is pleaded, immunity applies so long as the claim treats an interactive computer service provider as the publisher or speaker of third-party generated content. See, e.g., Estate of Bride v. YOLO Techs., Inc., 112 F.4th 1168, 1178-1179 (9th Cir. 2024), cert. denied, 145 S. Ct. 1435 (2025) (YOLO) (misrepresentation and products liability claims barred by § 230[c][1]); Calise, 103 F.4th at 743-744 (unjust enrichment, negligence, and unfair competition law claims barred by § 230[c][1]); Jones v. Dirty World Entertainment Recordings LLC, 755 F.3d 398, 402, 417 (6th Cir. 2014) (defamation, libel per se, false light, and intentional infliction of emotional distress claims barred by § 230[c][1]). Nevertheless, § 230(c)(1) "immunity is not limitless." Calise, 103 F.4th at 736. See Roommates.com, LLC, 521 F.3d at 1164 (CDA "was not meant to create a lawless no-man's-land on the Internet"). "Protection under § 230(c)(1) extends only to bar certain claims imposing liability for specific information that another party provided." Henderson v. Source for Pub. Data, L.P., 53 F.4th 110, 117 (4th Cir. 2022). Instead, the three requirements of § 230(c)(1) look first to the defendant's status as an "interactive computer service" provider, then to the kind of claim the plaintiff has brought (does it treat the

defendant as a publisher of information), and finally to the source of the information underlying the plaintiff's claim (who provided the information). See id. at 119-120. The parties agree that the first requirement is met: Meta is an interactive computer service provider. Their dispute centers on the other two requirements for § 230(c)(1) immunity. 16

1. Treated as the publisher. i. Plain meaning. The parties' arguments regarding the applicability of § 230(c)(1) immunity appear to rest on competing constructions of the phrase "treated as the publisher . . . of any information." 47 U.S.C. § 230(c)(1). Section 230 does not define the phrase. Accordingly, we "begin[] with the principal source of insight into legislative intent -- the plain language of the statute" (quotations and citation omitted). Patel, 489 Mass. at 362.

2. Business of publication. Applying the plain meaning of "publisher" as "one that makes public," "the reproducer of a work intended for public consumption," and "one whose business is publishing,"17 some Federal courts have concluded, as Meta urges us to do, that the phrase "treated as the publisher . . .

16 For purposes of our analysis of Meta's arguments in the present case, we address the third requirement of the test as articulated in Turo, 487 Mass. at 240, before the second.

17 See Webster's Third New International Dictionary 1837 (1981); Webster's Third New International Dictionary 1837 (1986).

of any information" extends to bar claims that implicate a provider's role as a publisher, including its editorial choices concerning whether, how, when, for how long, and to whom to publish information. See Force v. Facebook, Inc., 934 F.3d 53, 65 (2d Cir. 2019), cert. denied, 590 U.S. 942 (2020) (citing dictionary definitions, see note 17, supra).18 After all, such editorial choices are part and parcel of the business of publication. Thus, Meta presses, where a claim implicates

18 We note that the courts in Force, 934 F.3d at 65; Federal Trade Comm'n v. LeadClick Media, LLC, 838 F.3d 158, 175 (2d Cir. 2016); and Barnes, 570 F.3d at 1102, stated that Webster's Third New International Dictionary 1837 (1986) defined "publisher" as "one whose business is publication" rather than "one whose business is publishing." For purposes of this opinion, we treat "publication" and "publishing" as synonymous.

19 See Klayman v. Zuckerberg, 753 F.3d 1354, 1359 (D.C. Cir. 2014) ("the very essence of publishing is making the decision whether to print or retract a given piece of content"); Barnes, 570 F.3d at 1102 ("publication involves reviewing, editing, and deciding whether to publish or to withdraw from publication third-party content"); Doe v. MySpace, Inc., 528 F.3d 413, 418 (5th Cir.), cert. denied, 555 U.S. 1031 (2008) (§ 230 bars "all cases arising from the publication of user-generated content"); Roommates.com, LLC, 521 F.3d at 1170-1171 ("any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under [§] 230"); Universal Communication Sys., Inc. v. Lycos, Inc., 478 F.3d 413, 422 (1st Cir. 2007) (Lycos) (provider's "construct and operation" of websites, which contributed to proliferation of misinformation by third parties regarding plaintiff's company in violation of securities and cyberstalking laws, were "as much . . . editorial decision[s] with respect to that misinformation as a decision not to delete a particular posting"); Green v. America Online, 318 F.3d 465, 471 (3d Cir.), cert. denied, 540 U.S. 877 (2003) ("decisions relating to the monitoring, screening, and deletion of content from [the provider's] network -- actions quintessentially related to a publisher's role");

traditional publishing activities such as a provider's editorial choices, the claim "treat[s]" the provider "as the publisher . . . of any information" under the plain meaning of that phrase. This construction, which has a commonsense appeal, finds support in the language from some Federal court decisions, which broadly describe the immunity provided by § 230(c)(1) in analyzing claims challenging design features and algorithms that determine, inter alia, how and to whom service providers disseminate information. See, e.g., Force, 934 F.3d at 65 (claims seeking to hold social media company liable for "giving Hamas a forum with which to communicate and for actively bringing Hamas'[s] message to interested parties," including claim challenging algorithm designed to show content aligned with user's interests, fall within "heartland of what it means to be the 'publisher' of information" under § 230); Backpage.com, 817 F.3d at 20-21 (sex trafficking claims challenging website's posting policies, such as anonymization, concern features that "reflect choices about what content can

Zeran, 129 F.3d at 332, quoting W.L. Prosser & W.P. Keeton, Torts § 113, at 803 (5th ed. 1984) (publishers include "[t]hose who are in the business of making their facilities available to disseminate . . . the information gathered by others"); Zeran, supra at 330 ("deciding whether to publish, withdraw, postpone or alter content" are examples of "a publisher's traditional editorial functions").

appear on the website and in what form, are editorial choices that fall within the purview of traditional publisher functions," and are barred by § 230).

1. Common-law publisher liability. Some Federal courts have taken a slightly different approach to determining the plain meaning of the phrase "treated as the publisher . . . of any information," noting that the phrase has deep roots in the common law of information-based torts such as defamation. This approach, advocated by the Commonwealth, also has some force; where, as here, "Congress uses a common-law term, we must assume, absent a contrary indication, that it intends the common-law meaning." Ajemian v. Yahoo!, Inc., 478 Mass. 169, 176 (2017), cert. denied, 584 U.S. 910 (2018). See Neder v. United States, 527 U.S. 1, 23 (1999) (when Congress uses "common-law terms," it intends to incorporate their "well-settled meaning[s]"); 2B N.J. Singer & J.D. Shambie Singer, Statutes and Statutory Construction § 50:1, at 148 (7th ed. rev.) ("Where an operative word is not defined in a statute, the common law meaning controls"); id. at § 50:3, at 161-162 ("The meaning of well-defined common law words and phrases often carries over to statutes dealing with the same or a similar subject matter").20

20 See also George v. McDonough, 596 U.S. 740, 752 (2022) ("The real question is not what [a statutory term might mean] in

Relevant here, a claim treats a defendant as a publisher of information at common law where the claim (a) makes the defendant liable for intentionally or negligently publishing information to someone other than the subject of the information (dissemination element) and (b) seeks to impose liability based on the content of the information published (content element). See Henderson, 53 F.4th at 121-122. See, e.g., W.L. Prosser & W.P. Keeton, Torts § 113, at 803 (5th ed. 1984) (those who disseminate information gathered by others may be "subject to liability for the defamatory matter that was published" [emphases added]); Restatement (Second) of Torts § 558, at 155 (1977) ("To create liability for defamation there must be . . . a false and defamatory statement concerning another" and, inter alia, an "unprivileged publication to a third party" [emphasis added]); Restatement (Second) of Torts § 652E, at 396 (1977) (publisher may be held liable for tort of false light only when published statement was of improper nature -- that is, when statement "is such a major misrepresentation of [a subject's] character, history, activities or beliefs that serious offense may reasonably be expected to be taken by a reasonable man in

the abstract, but what was the 'prevailing understanding' of this term of art 'under the law that Congress looked to when codifying' it"); Stokeling v. United States, 586 U.S. 73, 80 (2019) ("[I]f a word is obviously transplanted from another legal source, whether the common law or other legislation, it brings the old soil with it" [citation omitted]).

his position"); Restatement (Second) of Torts, § 652D, at 383 (1977) (tort of publicity given to private life imposed liability on publisher for disseminating information "highly offensive to reasonable person"). See also Calise, 103 F.4th at 738, quoting Restatement (Second) of Torts § 577 comment a (1938) (observing that "publisher" as used in § 230[c][1] "has a well-defined meaning at common law" with "publication" defined as "[a]ny act by which [unlawful] matter is intentionally or negligently communicated to a third person"); Henderson, supra at 122 ("to hold someone liable as a publisher at common law was to hold them responsible for the content's improper character" [emphasis added]). In essence, common-law publisher liability is a form of vicarious liability, holding a publisher liable as an "intermediar[y]" of harmful content "originating with a third party" that the publisher disseminated to others. Zeran, 129 F.3d at 330-331. Informed by this common-law understanding, some Federal courts have construed the phrase "treated as the publisher . . . of any information" in § 230(c)(1) to require not only the dissemination element but also the content element -- that is, that the claim "impos[e] liability for specific information that another party provided." Henderson, 53 F.4th at 117. See, e.g., M.P. v. Meta Platforms Inc., 127 F.4th 516, 525, 530 (4th Cir.), cert. denied, 146 S. Ct. 287 (2025) (Rushing, J.,

concurring) (§ 230 immunity barred plaintiff's claims that provider "allows racist, harmful content to appear on its platform" because "[t]hough framed in products liability verbiage [the plaintiff's] claims undoubtedly seek to hold Facebook liable for disseminating on its platform improper content provided by others" [emphases added]); id. at 525 ("Crucially, [the plaintiff] cannot show that [the provider's] algorithm was designed in a manner that was unreasonably dangerous for viewers' use without also demonstrating that the algorithm prioritizes the dissemination of one type of content over another" [emphasis added]). Courts have observed that in enacting § 230(c)(1), Congress intended to preserve liability against the original author of the harmful message but to eliminate "the separate route of imposing tort liability on companies that serve as intermediaries for other parties' potentially injurious messages." Zeran, 129 F.3d at 330-331.21 See Jones, 755 F.3d at

21 To be sure, the United States Court of Appeals for the Fourth Circuit in Zeran also stated, "[L]awsuits seeking to hold a service provider liable for its exercise of a publisher's traditional editorial functions -- such as deciding whether to publish, withdraw, postpone or alter content -- are barred." Zeran, 129 F.3d at 330. See Jones, 755 F.3d at 407 (quoting same language from Zeran). However, the claim at issue sought to hold the provider liable for harms flowing from defamatory messages posted by a third person. Zeran, supra at 328-329 (plaintiff claimed harm from defamatory third-party posts on provider's bulletin board stating plaintiff offered for sale offensive and tasteless slogans related to Oklahoma City Federal

407 ("Section 230 marks a departure from the common-law rule that allocates liability to publishers . . . of tortious material written or prepared by others" [emphasis added]). Accord Doe No. 14 v. Internet Brands, Inc., 824 F.3d 846, 852 (9th Cir. 2016) (Internet Brands) (no § 230[c][1] immunity where cause of action, failure to warn of rape scheme known to provider through sources other than its platform, did "not seek to impose 'intermediary' liability"; "there [was] no allegation that [the provider] transmitted any potentially harmful messages

building bombing). See Jones, supra at 403-404 (plaintiff claimed harm from photographs and derogatory comments posted on website). The Fourth Circuit later clarified that § 230(c)(1) applies only where the claim seeks to hold the publisher liable for the harmful nature of the content published. See Henderson, 53 F.4th at 122. Similarly, in Turo, we acknowledged that the United States Court of Appeals for the First Circuit "has held that § 230 protects a service provider from liability for traditional editorial functions as well as for the provider's website construction and operation." Turo, 487 Mass. at 245 n.3, citing Hiam v. HomeAway.com, Inc., 267 F. Supp. 3d 338, 346 (D. Mass. 2017), aff'd, 887 F.3d 542 (1st Cir. 2018), Backpage.com, 817 F.3d at 21, and Lycos, 478 F.3d at 422. Notably, and consistent with the common-law reading of § 230(c)(1), in each of the cases we cited, the harm alleged flowed from specific third-party content. See Hiam, supra at 348, 354 (immunity applied to claims that sought to hold provider "liable for misleading or inaccurate material" included in third-party post listing rental property); Backpage.com, supra at 19-20 (immunity applied where minor plaintiffs sought to hold provider liable for having been sex trafficked by means of third-party posts advertising minors' availability as escorts); Lycos, supra at 415 (immunity applied to claims "objecting to a series of allegedly false and defamatory postings made under pseudonymous screen names on an Internet message board operated by [the defendant provider]"). See also note 27, infra.

between [the plaintiff and the rapists]" [emphasis added]); Backpage.com, 817 F.3d at 19, quoting Zeran, 129 F.3d at 331 ("websites that display third-party content may have an infinite number of users generating an enormous amount of potentially harmful content, and holding website operators liable for that content 'would have an obvious chilling effect' in light of the difficulty of screening posts for potential issues" [emphasis added]); Delfino v. Agilent Techs., Inc., 145 Cal. App. 4th 790, 802-803 (2006), cert. denied, 552 U.S. 817 (2007) (noting policy of § 230 is to "avoid the chilling effect upon Internet free speech that would be occasioned by the imposition of tort liability upon companies that do not create potentially harmful messages but are simply intermediaries for their delivery"). These courts have rejected the argument that § 230(c)(1) provides immunity from liability "anytime there is a 'but-for' causal relationship between the act of publication and liability," as it "bears little relation to publisher liability at common law." Henderson, 53 F.4th at 122. Engaging in traditional publishing activity "alone is not enough." Id. See Erie Ins. Co. v. Amazon.com, Inc., 925 F.3d 135, 139-140 (4th Cir. 2019) (although service provider published third-party speech in marketing defective product, § 230[c][1] did not bar claim against provider for selling defective product because claim was not based on "the content of [that] speech"). Accord

HomeAway.com, Inc. v. Santa Monica, 918 F.3d 676, 682 (9th Cir. 2019) (rejecting "use of a 'but-for' test that would provide immunity under the CDA solely because a cause of action would not otherwise have accrued but for the third-party content"); Internet Brands, 824 F.3d at 852-853 ("[p]ublishing activity is a but-for cause of just about everything [the interactive computer service provider] is involved in," yet "Congress has not provided an all purpose get-out-of-jail-free card for businesses that publish user content on the internet"); Airbnb, Inc. v. Boston, 386 F. Supp. 3d 113, 120 n.5 (D. Mass. 2019) (rejecting argument that United States Court of Appeals for First Circuit has construed § 230 to "expansively protect[] all decisions a company makes that in any way implicate the overall design and operation of its online platform"). Instead, "treated as the publisher . . . of any information" requires both the dissemination element and the content element. See Henderson, 53 F.4th at 123.

This common-law construction is further supported by § 230 as a whole. See S&H Indep. Premium Brands E., LLC v. Alcoholic Beverages Control Comm'n, 494 Mass. 464, 467 (2024) ("We do not read statutory language in isolation; instead, we must look to the statutory scheme as a whole to adduce legislative intent" [quotation and citation omitted]). As discussed supra, § 230(c)(1) immunity requires that "another information content

provider" supply the relevant information upon which liability is asserted (emphasis added). 47 U.S.C. § 230(c)(1). More than one person or entity, including the interactive computer service provider itself, may be the information content provider where they each contribute to the creation or development of the information. See note 15, supra (setting forth definition of "information content provider"); Roommates.com, LLC, 521 F.3d at 1165 ("the fact that users are information content providers does not preclude [the interactive computer service provider] from also being an information content provider by helping 'develop' at least 'in part' the information in the profiles"). Thus, courts have determined that a service provider is an "information content provider" where it directly and materially contributes to "what [makes] the content itself 'unlawful'" (emphasis added). Henderson, 53 F.4th at 127, quoting Force, 934 F.3d at 68. Where the provider so contributes to the unlawful content, it is not protected by § 230(c)(1).22 See

22 Compare LeadClick Media, LLC, 838 F.3d at 176 (defendant not entitled to § 230[c][1] immunity because it was "an information content provider with respect to the deceptive content at issue"), and Roommates.com, LLC, 521 F.3d at 1166 (no immunity where website provider created questions and suggested answers, which in turn materially contributed to eliciting information used to create "part of the [user] profile that is alleged to offend the Fair Housing Act and state housing discrimination laws"), with Kimzey v. Yelp! Inc., 836 F.3d 1263, 1269-1270 (9th Cir. 2016) (provider did not create or develop content where website did "absolutely nothing to enhance the defamatory sting of the message beyond the words offered by the

Henderson, supra at 128 ("where a company materially contributes to a message's unlawful content, that company stops being a mere 'intermediary' for another party's message" and loses immunity); Roommates.com, LLC, 521 F.3d at 1168 ("a website helps to develop unlawful content, and thus falls within the exception to [§] 230, if it contributes materially to the alleged illegality of the content"). This construction of "information content provider" as any person or entity that materially contributes to the unlawful nature of the information published bolsters the construction of "treated as the publisher . . . of any information" to include a content element; where the service provider created, in whole or in part, the content from which the alleged harm flows, it is not protected by § 230(c)(1).

  1. Legislative history. Where, as here, the plain meaning of the statute lends itself to competing constructions, we turn to legislative history to gain further insight into legislative intent. See Matter of the Estate of Mason, 493 Mass. 148, 152 (2023) ("where statutory language is not

user" [quotation and citation omitted]), Jones, 755 F.3d at 415-416 (platform not information content provider where it selected statements at issue for publication but "did not materially contribute to the illegality of those statements"), and Nemet Chevrolet, Ltd., 591 F.3d at 257 (defendant who "structured its website and its business operations to develop information related to class-action law suits" was not information content provider, as "there is nothing unlawful about developing this type of content").

conclusive, we may turn to extrinsic sources, including legislative history . . . for assistance in our interpretation" [quotation and citation omitted]). Significantly, § 230 was passed in the wake of two decisions, Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135 (S.D.N.Y. 1991) (CompuServe), and Stratton Oakmont, Inc. v. Prodigy Servs. Co., 23 Media L. Rep. 1794 (N.Y. Sup. Ct. May 25, 1995) (Prodigy), each concerning the liability of an interactive computer service provider for false and defamatory statements made by a third-party user on its platform.

In CompuServe, the plaintiff claimed it had been harmed by false and defamatory statements that were included in an online newsletter and published in CompuServe's electronic library, which consisted of a vast number of publications. CompuServe, supra at 137-138. Acknowledging that under the common law an entity who repeats or republishes defamatory material ordinarily is subject to liability as if it had originally published it, the Federal District Court first concluded that CompuServe was a common-law distributor akin to a library, rather than a traditional publisher, because it exercised "little or no editorial control over" the newsletter's contents; once the third party uploaded the newsletter into CompuServe's data banks, it became immediately available to subscribers. Id. at 139-140. The court reasoned that treating CompuServe as a common-law

publisher liable for the defamatory statements of others "would impose an undue burden on the free flow of information." Id. at 140. Instead, the court concluded that CompuServe could only be held liable if it knew or had reason to know of the allegedly defamatory information; because the plaintiffs had failed to adduce evidence of such knowledge, the court allowed summary judgment in CompuServe's favor. Id. at 141. While CompuServe was spared liability, the decision left open the prospect that a service provider might be held liable if it was on actual or constructive notice of the unlawful content published on its platform. See id.23

In the second case, Prodigy, 23 Media L. Rep. at 1796-1797, the State trial court also considered a defamation claim against an interactive computer service provider; this time, however, the provider had undertaken to remove some offensive comments from its bulletin board. Because it had exercised a degree of editorial control over the content on its platform, the court

23 As Federal courts have acknowledged, the standard for common-law distributor liability -- whether the defendant knew or had reason to know of the allegedly defamatory information -- would have a chilling effect on speech similar to that caused by publisher liability; if liability were to attach as soon as an individual or entity notified the service provider of a message's defamatory nature (i.e., provided the service provider with reason to know of the message's defamatory nature), providers would "have a natural incentive simply to remove messages upon notification, whether the contents were defamatory or not." Zeran, 129 F.3d at 333.

held that the interactive computer service provider was liable as a publisher at common law for defamatory comments posted by a third-party user on the provider's online bulletin board. Id., citing Restatement (Second) of Torts § 578 (1977). In so doing, the court remarked that had the provider simply done nothing editorially, like CompuServe, it would not have been liable. See Prodigy, supra at 1798 (provider's "conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice").

Concerned that interactive computer service providers would either limit speech on their platforms24 or cease efforts to remove inappropriate content25 in order to avoid liability in

24 See 141 Cong. Rec. H8472 (Aug. 4, 1995) (statement of Rep. Bob Goodlatte) (§ 230 encourages content moderation but "doesn't violate free speech or the right of adults to communicate with each other"); id. at H8471 (statement of Rep. Zoe Lofgren) ("I very much endorse the [§ 230] amendment, and I would urge its approval so that we preserve the [F]irst [A]mendment and open systems on the Net").

25 See 141 Cong. Rec. H8470 (Aug. 4, 1995) (statement of Rep. Christopher Cox) (observing that we can "keep[] pornography away from our kids" without "hav[ing] a Federal Computer Commission with an army of bureaucrats regulating the Internet"); id. (statement of Rep. Ron Wyden) (opposing solution that would "put in place the Government rather than the private sector about this task of trying to define indecent communications and protecting our kids"); id. at H8471 (statement of Rep. Bob Goodlatte) ("The [§ 230] amendment is a thoughtful approach to keep smut off the net without government censorship").

light of these decisions, Congress enacted § 230. See 141 Cong. Rec. H8470 (Aug. 4, 1995) (statement of Rep. Christopher Cox) (§ 230 "will do two basic things: [f]irst, it will protect computer Good Samaritans, online service providers . . . who take[] steps to screen indecency and offensive material . . . from taking on liability," and "[s]econd, it will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the Internet"); S. Rep. No. 104-230, at 194 (1996) ("One of the specific purposes of this section is to overrule Stratton-Oakmont v. Prodigy and any other similar decisions which have treated [interactive computer service providers] and users as publishers or speakers of content that is not their own because they have restricted access to objectionable material"); id. (decisions like Prodigy "create serious obstacles to the important federal policy of empowering parents to determine the content of communications their children receive through interactive computer services"); 141 Cong. Rec. H8470 (Aug. 4, 1995) (statement of Rep. Christopher Cox) (§ 230 "will protect [interactive computer service providers] from taking on liability such as occurred in the Prodigy case").

In sum, Congress intended to immunize interactive computer service providers against claims that would hold them liable as intermediaries for injuries caused by information provided by

third-party users of their platforms.26 See Henderson, 53 F.4th at 121 (because § 230[c][1] derives its significance from defamation law, "the scope of 'the role of a traditional publisher,' and therefore the scope of what § 230[c][1] protects, is guided by the common law"); Batzel v. Smith, 333 F.3d 1018, 1026 (9th Cir. 2003), cert. denied, 541 U.S. 1085 (2004), overruled on other grounds by Gopher Media LLC v. Melone, 154 F.4th 696, 699 (9th Cir. 2025) ("§ 230[c][1] . . . overrides the traditional treatment of publishers, distributors, and speakers under statutory and common law"); Zeran, 129 F.3d at 332 ("The terms 'publisher' and 'distributor' [as relevant to § 230(c)(1)] derive their legal significance from the context of defamation law"). True to Congress's intent and the common-law meaning of "treated as the publisher . . . of any information," we decline Meta's invitation to read § 230(c)(1) immunity so broadly as to encompass all claims that implicate publishing activities regardless of whether the claims seek to hold the service provider liable for the content of the information published. See Henderson, 53 F.4th at 117 (Section 230 "is not a license to

26 Because of the ambiguity in the meaning of the phrase "treated as the publisher . . . of any information," we look to the historical context to help inform Congress's intent, recognizing that "the language of the statute does not limit its application to defamation cases." Barnes, 570 F.3d at 1101.

do whatever one wants online. Protection under § 230[c][1] extends only to bar certain claims imposing liability for specific information that another party provided"). Instead, a claim treats a provider as a publisher of information where it meets both the dissemination and content elements. Indeed, in each case relied on by Meta where the court determined that

§ 230 immunity applied,27 save one,28 the harm alleged was traceable to the content of the information published.

27 See Doe v. Grindr Inc., 128 F.4th 1148, 1152-1153 (9th Cir.), cert. denied, 146 S. Ct. 319 (2025) (§ 230 protected defendant against defective design, defective manufacturing, and negligence claims, which sought to hold defendant liable "for facilitating communication among users for illegal activity, including the exchange of child sexual abuse material" [emphasis added]); M.P., 127 F.4th at 525 (§ 230[c][1] shielded defendant against tort claim challenging content-sorting algorithm where, in essence, plaintiff took "issue with the fact that Facebook allows racist, harmful content to appear on its platform and directs that content to likely receptive users to maximize Facebook's profits" [emphasis added]); YOLO, 112 F.4th at 1182 ("Plaintiffs' product liability claims attempt to hold YOLO responsible as the speaker or publisher of harassing and bullying speech" [emphasis added]); Herrick v. Grindr LLC, 765 Fed. Appx. 586, 590 (2d Cir.), cert. denied, 589 U.S. 976 (2019) ("products liability claims arise from the impersonating content that [the plaintiff's] ex-boyfriend incorporated into profiles he created and direct messages with other users" and therefore seek to hold provider "liable for its failure to combat or remove offensive third-party content" [emphases added]); Dyroff

  1. Ultimate Software Group, Inc., 934 F.3d 1093, 1097-1098 (9th Cir. 2019), cert. denied, 590 U.S. 942 (2020) (design defect claim challenging content-neutral algorithm that connected two users treated platform as publisher where claim sought to hold platform liable for one user's death from heroin purchased using information provided by second user); Force, 934 F.3d at 65 (claim challenging Facebook's content suggestion algorithm treated Facebook as publisher where terrorist organization provided all complained-of content that allegedly encouraged terrorist attacks on plaintiff victims; claim sought to "hold Facebook liable for 'giving [the terrorist organization] a forum with which to communicate and for actively bringing [the organization's] message to interested parties'" via its recommendation algorithm); Kimzey, 836 F.3d at 1266 (plaintiff could not "plead around the CDA to advance the same basic argument that the statute plainly bars: that Yelp published user-generated [defamatory] speech that was harmful to [the plaintiff]" [emphasis added]); Backpage.com, 817 F.3d at 19-20 (immunity applied where third-party users provided relevant information used to ensnare plaintiffs in sex trafficking ring); Barnes, 570 F.3d at 1103 (immunity applied where claim faulted

provider for failing to "remov[e] . . . the indecent profiles that [the plaintiff's] former boyfriend posted" [emphasis added]); Doe (K.B.) v. Backpage.com, LLC, 768 F. Supp. 3d 1057, 1062 (N.D. Cal. 2025) (civil sex trafficking and products liability claims barred because complaint sought "to hold Meta responsible based on its role as a publisher of [sex trafficking related] third-party content that it failed to moderate or otherwise remove"); Lama v. Meta Platforms, Inc., 732 F. Supp. 3d 214, 221 (N.D.N.Y. 2024) ("Although Plaintiff attempts to argue that his claims are not 'based upon the content of the vile text messages' posted to Instagram, he promptly shows that he cannot sustain that argument . . ."); L.W. v. Snap Inc., 675

  1. Supp. 3d 1087, 1097 (S.D. Cal. 2023) ("the alleged flaws in
    Snapchat's design, in essence seek to impose liability on [Snap] based on how well [Snap] has designed its platform to prevent the posting of third-party content containing child pornography and to remove that content after it is posted" [quotation and citation omitted; emphasis added]); Fields v. Twitter, Inc., 217

  2. Supp. 3d 1116, 1120 (N.D. Cal. 2016), aff'd, 881 F.3d 739
    (9th Cir. 2018) (provider immune from claims seeking to hold it "liable for allowing [a terrorist group] to use its network to spread propaganda and objectionable, destructive content" [emphasis added]); Doe vs. Snap, Inc., No. H-22-00590, slip op. at 28 (S.D. Tex. July 7, 2022) ("Doe's negligent undertaking, negligent design, and gross negligence claims all seek to hold Snap liable for [inappropriate] messages and photos sent by" teacher to student); Federal Trade Comm'n vs. Match Group, Inc., No. 3:19-CV-2281-K, slip op. at 19 (N.D. Tex. Mar. 24, 2022) (immunity applied where "underlying communication created by a third-party . . . is truly 'the specific harmful material at issue,' not the automatically generated advertisement sent by" provider [emphasis added]); V.V. vs. Meta Platforms, Inc., No. X06-UWY-CV-23-5032685-S (Conn. Super. Ct. Feb. 16, 2024) (parents who filed product liability claims against Meta on behalf of minor child conceded that "harmful content at issue" -- highly sexualized content, content about self-harm and disordered eating, unsolicited photographs of male genitalia, and sexually inappropriate messages -- was provided by third- party users); In re Facebook, Inc., 625 S.W.3d at 93-95 (negligence and products liability claims barred because they sought to hold Facebook liable for serving as intermediary for injurious messages by sex traffickers). To be sure, Meta also relies on one Federal District 28 Court overseeing a multidistrict litigation asserting claims similar to those asserted by the Commonwealth here. See In re

  3. Third-party content requirement. The next requirement
    of § 230(c)(1) immunity is that the relevant "information [was] provided by another information content provider" (emphasis added). 47 U.S.C. § 230(c)(1). For purposes of our present analysis, it suffices to reaffirm that § 230(c)(1) does not provide immunity where liability is based on the provider's own speech. See Turo, 487 Mass. at 240, quoting Universal Communication Sys., Inc. v. Lycos, Inc., 478 F.3d 413, 419 (1st Cir. 2007) ("'an interactive computer service provider remains liable for its own speech' and for its own unlawful conduct"). With this understanding of § 230(c)(1), we turn to examine the Commonwealth's claims to determine whether they seek to treat Meta as a publisher or speaker by imposing liability based on the content of the information it publishes, and whether the source of the information is Meta itself.

Social Media Adolescent Addiction/Personal Injury Prods. Liab. Litig., 753 F. Supp. 3d 849, 880-883 (N.D. Cal. 2024) (addressing consumer protection claims); In re Social Media Adolescent Addiction/Personal Injury Prods. Liab. Litig., 702 F. Supp. 3d 809, 830-834 (N.D. Cal. 2023) (addressing negligent design claims). The court did not appear to consider the common-law origins of publisher liability or the statute's legislative history. We are not persuaded by its reasoning. The matter is now pending before the United States Court of Appeals for the Ninth Circuit. See California vs. Meta Platforms Inc., U.S. Ct. App., Nos. 24-7032, 24-7037, 24-7265, 24-7300, 24-7304, 24-7312 (9th Cir.) (oral argument held Jan. 6, 2026).

Unfair business practices (count I). The unfair business practices claim does not seek to hold Meta liable based on the content of the information Meta publishes and as such does not meet the content element. The challenged design features (e.g., infinite scroll, autoplay, IVR, and ephemeral content) concern how, whether, and for how long information is published, but the published information itself is not the source of the harm alleged. Instead, the claim alleges that the features themselves induce compulsive use independent of the content provided by third-party users. Meta contends that the unfair business practices claim treats it as a publisher of third-party information because, in the absence of third-party content, the design features could not facilitate addiction in young users. But the fact that the features require some content to function is not controlling; instead, as discussed supra, to satisfy the content element, we look to whether the claim seeks to hold Meta liable for harm stemming from third-party information that it published. Here, the unfair business practices claim does not; the Commonwealth alleges that the features themselves prolong users' time on the platform, not that any information contained in third-party posts does so. In this sense, the claim is indifferent as to

the content published. Consequently, the unfair business practices claim in count I does not treat Meta as a publisher of information, and § 230(c)(1) immunity does not apply. It is no answer, as Meta emphasizes, that § 230(c)(1) applies when Meta disseminates "any information provided by another" (emphasis added). 47 U.S.C. § 230(c)(1). While the

Because the features themselves prolong young users' time on the platform and not any information contained in third-party posts, we also disagree with Meta that § 230(c)(1) immunity might preclude any eventual remedy. Indeed, Meta has not shown that, if it is found liable, any cure would require "changes to the content posted by the [platform's] users." Internet Brands, 824 F.3d at 851. See Lemmon v. Snap, Inc., 995 F.3d 1085, 1092 (9th Cir. 2021) (§ 230 immunity did not apply where defendant "could have satisfied its 'alleged obligation' -- to take reasonable measures to design a product more useful than it was foreseeably dangerous -- without altering the content that [the defendant's] users generate").

Moreover, at the least with respect to the notifications feature, Meta appears to be the information content provider and thus not entitled to immunity on this additional basis.

Relying on Lemmon, 995 F.3d at 1092-1093, the Commonwealth argues that the Superior Court judge correctly concluded that the unfair business practices claim does not treat Meta as a publisher because the claim concerns Meta's conduct as a product designer, rather than as a publisher. In Lemmon, the challenged design feature was a speed filter and reward system that encouraged young users to capture footage of themselves driving at excessive speeds; the feature itself was dangerous regardless of whether the users posted the resulting footage on the platform. Id. at 1092. In other words, the claims did not concern the provider's publishing activity at all. By contrast, here, the challenged features are publishing tools that control how Meta publishes content to users of its platform. Because we rest our decision on the grounds set forth above, we need not decide whether the claim survives because it is pled as product design defect.

quoted language is an additional requirement that must be met for § 230(c)(1) to apply, Meta must also show that the claim "treat[s] [it] as the publisher . . . of any information." Id. As we explained supra, this phrase requires Meta to show that the claim (a) makes the defendant liable for intentionally or negligently publishing information to someone other than the subject of the information and (b) seeks to impose liability based on the content of the information published. Here, Meta has not shown the latter. Contrary to Meta's argument, the fact that a claim concerns publishing activities, including the use of algorithms in connection with publishing activities, is not enough to bring the claim within the immunity provided by § 230(c)(1). The United States Court of Appeals for the Second Circuit's decision in Force is not to the contrary. There, the court addressed the plaintiffs' argument that Facebook did not "act as the publisher of Hamas's content within the meaning of [§] 230(c)(1) because it uses algorithms to suggest content to users, resulting in 'matchmaking'" (emphasis added). Force, 934 F.3d at 65. The focus of the claims at issue was the harm caused by the published information itself -- namely, the terrorist content posted by Hamas. Id. See id. at 59 (describing use of Facebook to "encourage[] terrorist attacks," "to celebrate these attacks," "to transmit political messages, and to generally

support further violence," thus "enabl[ing] Hamas 'to disseminate its messages directly to its intended audiences'"). Thus, the court did not determine that § 230(c)(1) broadly protects a provider's algorithm-driven editorial decision-making regardless of whether the claim alleges harm stemming from the content of the information published; instead, the court concluded that § 230(c)(1) immunity extended to Facebook's conduct in specifically publishing Hamas's terrorist messages and using that content to form connections between users, including through its algorithms. Accord YOLO, 112 F.4th at 1180 (§ 230[c][1] barred products liability claim challenging messaging anonymity feature where injury alleged -- teen suicide and mental health challenges -- stemmed from harassing content of third-party users' messages); Dyroff v. Ultimate Software Group, Inc., 934 F.3d 1093, 1095, 1097-1098 (9th Cir. 2019), cert. denied, 590 U.S. 942 (2020) (§ 230[c][1] barred claim challenging content-neutral algorithm that connected decedent and drug dealer where injury alleged -- decedent's overdose -- stemmed from harmful communications between decedent and drug dealer concerning sale of heroin).

Deceptive business practices (count II). Section 230(c)(1) does not bar the deceptive business practices claim because it is based on Meta's own speech -- its allegedly false statements that Instagram is safe and not addictive, and that

Meta prioritizes young users' well-being, despite internal reports and communications suggesting awareness of the harmful effects of Instagram. See Turo, 487 Mass. at 245 n.3 ("An Internet service provider remains liable for its own speech . . . as Turo does here by creating speech through the language of this search feature [facilitating unauthorized transactions]").

Deceptive and unfair business practices (count III). Meta argues that § 230(c)(1) bars the deceptive and unfair business practices claims in count III challenging its age-gating measures as ineffective because deciding who can access Meta's services implicates Meta's role as a publisher. Section 230(c)(1) does not bar the deception claim in count III for the same reasons that the provision does not bar the deception claim in count II -- the claim focuses on Meta's own affirmative misstatements about the inaccessibility of its platform to underage users. The unfair business practices claim in count III focuses on the ineffective design of Instagram's age-gating mechanism, not

Meta contends that the deception challenges are barred insofar as they would impose liability on Meta for failing to warn users about the allegedly addictive nature of its design features or any specific harmful content on Instagram. Meta misapprehends the Commonwealth's claim. The deception claim in count II does not concern a failure to warn, and indeed, the Commonwealth disclaims raising such a claim.

on the content of any published information; accordingly, § 230(c)(1) does not protect Meta. The Federal District Court decision in Backpage.com does not support Meta's argument that claims challenging age-gating features are barred by § 230(c)(1) regardless of the content published. See Backpage.com, 768 F. Supp. 3d at 1064-1065. In that case, the plaintiff created an Instagram account and was connected with a third-party user, a sex trafficker, who then repeatedly sold the plaintiff for sex on the platform. Id. at 1060. The plaintiff sued Meta, alleging, inter alia, that Meta designed an unreasonably dangerous product by failing to include identity and age verification requirements as part of the account setup process. Id. at 1064. The court concluded that § 230(c)(1) barred the claim as the plaintiff's "theory of liability derives from Meta's alleged failure to include adequate age and identity verification measures that would prevent the creation of fake accounts facilitating sex trafficking communications" (emphasis added). Id. at 1065. See id. (§ 230[c][1] immunity applied as plaintiff's "proposed verification requirements [were] designed with the particular purpose of limiting . . . the types of content likely to be posted," i.e., "sex trafficking content"). Accord Doe v. Grindr Inc., 128 F.4th 1148, 1153 (9th Cir.), cert. denied, 146 S. Ct. 319 (2025) (§ 230[c][1] barred design defect claim challenging features that matched adults and

children for illegal sexual activity because "the defendant acted as a publisher, and third-party communications caused the harm to the victim" [emphasis added]). Here, the Commonwealth asserts that the defective age verification features give rise to heightened harm to underage users who are particularly vulnerable to the design features challenged in count I. The Commonwealth does not allege harm stemming from the content of any information published.

Conclusion. For the foregoing reasons, we conclude that Meta's appeal from the order denying the motion to dismiss insofar as it concerns immunity pursuant to § 230(c)(1) is properly before us under the doctrine of present execution, and we affirm the Superior Court's order as it pertains to § 230(c)(1) immunity.

So ordered.

Because we conclude that § 230(c)(1) does not bar counts I to III, we also conclude that it does not bar the Commonwealth's public nuisance claim, which is predicated on the same allegedly unfair and deceptive practices in counts I to III.

Named provisions

Section 230(c)(1)

Classification

Agency: MA SJC
Filed: April 10, 2025
Instrument: Enforcement
Legal weight: Binding
Stage: Final
Change scope: Substantive
Docket: SJC-13747

