By Javier Surasky
Large
digital platforms have long pushed a narrative that worked very much in their
favor: if something harmful happened as a result of their use, the
responsibility lay with users, those posting content, or, at most, with
specific failures in moderation systems. That framing kept the platforms
themselves out of the spotlight. That narrative is now starting to break down.
The verdict
in K.G.M. v. Meta Platforms, Inc., et al. (K.G.M. v. Meta and
Google/YouTube), delivered in Los Angeles on March 25, 2026, shifts the focus
away from what circulates on social media toward how these networks are
designed to capture users' attention, keep them engaged, and shape their
actions online, all through the platform "architecture": the technical
features and design choices that influence user behavior. The case initially
included TikTok and Snap as defendants, but both settled and stepped out
before trial.
A shift in focus
A Los
Angeles jury found, by a 10–2 vote, that Meta and Google/YouTube were negligent
and failed to adequately warn users about the risks tied to their platforms.
While the ruling currently applies to a single 20-year-old plaintiff, K.G.M.,
it is a “bellwether case” that opens the door to more than 2,000 claims already
filed in U.S. courts. Its weight, therefore, goes beyond an individual lawsuit,
pointing the way forward for future litigation involving youth and social
media.
One highly significant but little-discussed point is that the case pinned the source of harm squarely on the platforms' own architecture.
The USD
6 million in damages, 70% to be paid by Meta and 30% by YouTube/Google,
matters less for its size (still substantial) than for the reasoning behind
it: the jury held that both companies' conduct was a substantial factor in
causing the plaintiff's social media addiction and the psychiatric harm that
followed.
This cracks
open what had seemed like an unbreakable shield in lawsuits against platforms:
Section 230, the U.S. law providing that digital platforms cannot be treated
as the publishers of third-party content, which grants them broad protection
from liability for user-posted material.
From content to design
What made
this shift possible is that the legal teams representing K.G.M. (Beasley Allen and The Lanier Firm) didn't
go after content; they zeroed in on platform architecture and the duty to warn
about risks. This change in approach carries wide political implications: if
the issue lies not only in what users post or consume, but also in features
designed to maximize time on site, attention, and engagement, then platforms
stop being mere intermediaries and start showing up as architects of behavior
within these digital environments.
It was a
bold move. The U.S. Surgeon General's advisory Social Media and Youth Mental
Health (2023) had already pointed out that limited access to data and the
lack of transparency from tech companies make it harder to understand social
media's impact on youth mental health. That, in turn, makes it harder to assess
risks, establish causality, and develop evidence-based regulatory responses.
Although the full official text of the special verdict form or final ruling has not yet been made public (this is being written just one day after the decision was announced), at least six major outlets (Reuters, BBC, Deutsche Welle, Associated Press, The Guardian, and The Washington Post, among others) report that the jury found negligence and failures to warn on the part of both companies, concluding that they substantially contributed to the harm suffered by K.G.M.
Possible external impacts
While this
is a domestic case, and the companies have already said they plan to appeal, it
is likely to ripple outward internationally. It comes at a time when multiple jurisdictions are moving to step up regulation of social media platforms. The
European Union has issued guidelines on child protection under the Digital
Services Act. The United Kingdom, through its Online Safety Act, has introduced
specific obligations to reduce children’s exposure to online harm. Australia
has moved forward with age restrictions for social media use and requirements
for platforms to prevent structural risks.
In that
context, the verdict reinforces a growing global trend: moving away from
content-focused regulation toward a safety-by-design model.
Conclusion: a changing landscape
For too
long, public debate around digital platforms has revolved around moderation,
freedom of expression, disinformation, and censorship. While all of these
issues matter, attention is shifting toward a more structural question: what
responsibility do designers bear when they build systems whose profitability
depends on fostering dependency and sustained exposure to risk?
Whether or
not the ruling is appealed, and whatever its final outcome may be, the K.G.M. v. Meta
and YouTube/Google case makes one thing clear: global conversations on platform
regulation and governance are entering a new phase.
