Google Defends Section 230 in Supreme Court Terror Case

WASHINGTON, December 8, 2022 — Holding tech companies liable for terrorist content on their platforms risks substantially limiting their ability to moderate content effectively without overly restricting speech, according to several industry associations and civil rights organizations.

The Computer & Communications Industry Association, along with seven other tech associations, filed an amicus brief Tuesday emphasizing the vast amount of online content generated on a daily basis and the existing efforts of tech companies to remove harmful content.

A separate coalition of organizations, including the Electronic Frontier Foundation and the Center for Democracy & Technology, also filed an amicus brief.

Supreme Court to hear two social media cases next year

The briefs were filed in support of Twitter as the Supreme Court prepares to hear Twitter v. Taamneh in 2023, alongside the similar case Gonzalez v. Google. The cases, brought by relatives of ISIS attack victims, argue that social media platforms allow groups like ISIS to publish terrorist content, recruit new operatives and coordinate attacks.

Both cases were initially dismissed, but an appeals court in June 2021 overturned the Taamneh dismissal, holding that the case adequately asserted its claim that tech platforms could be held liable for aiding acts of terrorism. The Supreme Court will now decide whether an online service can be held liable for “knowingly” aiding terrorism if it could have taken more aggressive steps to prevent such use of its platform.

The Taamneh case hinges on the Anti-Terrorism Act, which says that liability for terrorist attacks can be placed on “any person who aids and abets, by knowingly providing substantial assistance.” The case alleges that Twitter did this by allowing terrorists to use its communications infrastructure while knowing that such use was occurring.

Gonzalez is more directly focused on Section 230, a provision under the Communications Decency Act that shields platforms from liability for the content their users publish. The case looks at YouTube’s targeted algorithmic recommendations and the amplification of terrorist content, arguing that online platforms should not be protected by Section 230 immunity when they engage in such actions.

Justice Clarence Thomas tips his hand against Section 230

Supreme Court Justice Clarence Thomas wrote in 2020 that the “sweeping immunity” granted by current interpretations of Section 230 could have serious negative consequences, and suggested that the court consider narrowing the statute in a future case.

Experts have long warned that removing Section 230 could have the unintended effect of dramatically increasing the amount of content removed from online platforms, as liability concerns would incentivize companies to err on the side of over-moderation.

Without some form of liability protection, platforms “would be likely to use necessarily blunt content moderation tools to over-restrict speech or to impose blanket bans on certain topics, speakers, or specific types of content,” the EFF and other civil rights organizations argued.

Platforms are already self-motivated to remove harmful content because failing to do so risks driving away their users, CCIA and the other tech organizations said.

There is an immense amount of harmful content online, and moderating it is a careful, costly and iterative process, the CCIA brief said, adding that “mistakes and difficult judgement calls will be made given the vast amounts of expression online.”
