A 30-year legal shield is cracking: Meta and Google face a crisis


A legal shield that tech giants have relied on for three decades to dodge liability is now facing unprecedented challenges.

Last week, Meta and Google-owned YouTube lost two separate jury trials, with damages totaling roughly $400 million.

At the same time, a wave of new lawsuits has been filed, with plaintiffs' attorneys systematically chipping away at the long-standing immunity tech platforms have enjoyed by framing claims in ways that sidestep Section 230 of the U.S. Communications Decency Act.

The Communications Decency Act was passed by Congress in 1996 and signed into law by then-President Bill Clinton. Section 230 allows websites to moderate content without being held liable for the content they ultimately leave up.

Over the past three decades, platforms such as Meta, Google, TikTok, and Snap have all benefited from this provision, enabling them to define themselves as neutral platforms and thus avoid a large number of potential lawsuits.

As the tech industry moves from the era of traditional search and social networks into a new landscape led by artificial intelligence, the nature of legal risk is also quietly changing. Platforms are no longer just passively carrying users’ content; instead, they actively shape user experiences through algorithmic recommendations, autoplay, and even AI-generated content.

Two trial defeats: product design becomes the line of attack

Last week, a plaintiff using the pseudonym Jane Doe filed a class-action lawsuit against Google, alleging that the company's AI model generated its own summaries and links that exposed personally identifying information of Epstein's victims, including names, phone numbers, and email addresses.

According to CNBC, plaintiff attorney Kevin Osborne said the lawsuit was filed because Google refused the plaintiffs’ request to delete victims’ contact information from the AI model. Osborne said that because information spreads extremely fast, the case must move quickly:

The reason we chose to file at that time is that we needed to act as quickly as possible, to get these things taken down. People were getting calls from complete strangers and receiving death threats. It's basically a nightmare.

Osborne added that, given Meta's courtroom defeat last week, the timing was "pure coincidence," but he said the common thread in these cases is that plaintiffs are trying to work around Section 230. Osborne said:

In this case, it's an AI model generating its own content, and the courts haven't really dug into that yet.

Last week, a jury in New Mexico found that Meta was liable in a case involving child safety; at the same time, a jury in Los Angeles found that Meta, Facebook’s parent company, was negligent in another personal injury case.

Both companies said they plan to appeal last week’s rulings.

Legislative gridlock and judicial outlook

At the level of the U.S. Congress, both parties have previously proposed various reform plans for Section 230 of the Communications Decency Act, but none have been implemented.

During his first term, Trump supported imposing more restrictions on social media companies; during the 2020 campaign, Joe Biden also publicly said the provision should be repealed.

Nadine Farid Johnson, policy director at the Knight First Amendment Institute at Columbia University, attributed the legislative difficulty to “these issues being extremely complex.”

Farid Johnson is currently calling for Congress to take a more cautious reform path, proposing that technology companies be allowed to obtain Section 230 protections only after meeting specific conditions such as data privacy and platform transparency.

She warned that:

As platforms continue to expand their use of generative AI and keep upgrading their algorithmic capabilities, legal challenges will become even more complicated. What we’re worried about is that every technical iteration turns into a game of whack-a-mole.

Legal experts say that, on appeal, the cases above may ultimately reach the U.S. Supreme Court, which could deliver an authoritative ruling on whether such platform features receive legal protection.

David Greene, senior legal counsel at the Electronic Frontier Foundation, also noted that there is currently no consensus in the legal community on whether product features are protected by Section 230 of the Communications Decency Act, or even by the First Amendment. Greene said:

If something is, in essence, speech, then labeling it a "design feature" is meaningless; it is protected by both the First Amendment and Section 230 of the Communications Decency Act.
