
SCOTUS decision in Google, Twitter cases a win for algorithms too

In a pair of lawsuits targeting Twitter, Google and Facebook, the Supreme Court had its first chance to take on the 1996 law that helped give rise to social media. But instead of weighing in on Section 230, which shields online services from liability for what their users post, the court decided the platforms didn’t need special protections to avoid liability for hosting terrorist content.

The finding, issued Thursday, is a blow to an idea gaining adherents in Congress and the White House: that today's social media platforms ought to be held responsible when their software amplifies harmful content. The Supreme Court ruled that they should not, at least under U.S. terrorism law.

“Plaintiffs assert that defendants’ ‘recommendation’ algorithms go beyond passive aid and constitute active, substantial assistance” to the Islamic State of Iraq and Syria, Justice Clarence Thomas wrote in the court’s unanimous opinion. “We disagree.”

The two cases were Twitter v. Taamneh and Gonzalez v. Google. In both cases, the families of victims of ISIS terrorist attacks sued the tech giants for their role in distributing and profiting from ISIS content. The plaintiffs argued that the algorithms that recommend content on Twitter, Facebook and Google’s YouTube aided and abetted the group by actively promoting its content to users.

Many observers anticipated the case would allow the court to pass judgment on Section 230, the portion of the Communications Decency Act passed in 1996 to protect online service providers like CompuServe, Prodigy and AOL from being sued as publishers when they host or moderate information posted by their users. The goal was to shield the fledgling consumer internet from being sued to death before it could spread its wings. Underlying the law was a concern that holding online forums responsible for policing what people could say would have a chilling effect on the internet’s potential to become a bastion of free speech.

But in the end, the court didn’t even address Section 230. It decided it didn’t need to, once it concluded the social media companies hadn’t violated U.S. law by automatically recommending or monetizing terrorist groups’ tweets or videos.

As social media has become a primary source of news, information and opinion for billions of people around the world, lawmakers have increasingly worried that online platforms like Facebook, Twitter, YouTube and TikTok are spreading lies, hate and propaganda at a scale and speed that are corrosive to democracy. Today’s social media platforms have become more than just neutral conduits for speech, like telephone systems or the U.S. Postal Service, critics argue. With their viral trends, personalized feeds and convoluted rules for what people can and can’t say, they now actively shape online communication.

The court ruled, however, that those decisions are not enough to find the platforms had aided and abetted ISIS in violation of U.S. law.

“To be sure, it might be that bad actors like ISIS are able to use platforms like defendants’ for illegal — and sometimes terrible — ends,” Thomas wrote. “But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large.”

Thomas in particular has expressed interest in revisiting Section 230, which he sees as giving tech companies too much leeway to suppress or take down speech they deem to violate their rules. But his apparent dislike of online content moderation is also consistent with today’s opinion, which will reassure social media companies that they won’t necessarily face legal consequences for being too permissive on harmful speech, at least when it comes to terrorist propaganda.

The rulings leave open the possibility that social media companies could be found liable for their recommendations in other cases, and perhaps under different laws. In a brief concurrence, Justice Ketanji Brown Jackson took care to point out that the rulings are narrow. “Other cases presenting different allegations and different records may lead to different conclusions,” she wrote.

But there was no dissent to Thomas’s view that an algorithm’s recommendation wasn’t enough to hold a social media company liable for a terrorist attack.

Daphne Keller, director of platform regulation at the Stanford Cyber Policy Center, advised against drawing sweeping conclusions from the rulings. “Gonzalez and Taamneh were *extremely weak* cases for the plaintiffs,” she wrote in a tweet. “They do not demonstrate that platform immunities are limitless. They demonstrate that these cases fell within some pretty obvious, common sense limits.”

Yet the wording of Thomas’s opinion is cause for concern to those who would like to see platforms held liable in other sorts of cases, such as the Pennsylvania mother suing TikTok after her 10-year-old died attempting a viral “blackout challenge.” His comparison of social media platforms to cellphones and email suggests an inclination to view them as passive hosts of information even when they recommend it to users.

“If there were people pushing on that door, this pretty firmly kept it closed,” said Evelyn Douek, an assistant professor at Stanford Law School.
