Google, Twitter Supreme Court Cases Won’t Break the Internet

Despite all the furor, the future of the internet does not hinge on a pair of cases argued this week at the US Supreme Court. There’s no risk that the statutory immunity that Congress granted long ago to internet service providers will collapse. The justices are being asked to decide a narrow and technical legal question. Should the ISPs lose, they’ll make a handful of tweaks in the algorithms they employ to sort content. The experience of most users will barely budge.

The two cases that have sparked the dire predictions involve lawsuits against Google and Twitter, respectively. The suits were filed by families who have lost loved ones to vicious acts of terrorism. The central allegation is that the companies abetted those acts through the videos and other materials they made available to users. The justices aren’t being asked to decide whether the allegations are true but whether the cases should go to trial, in which case the jury would determine the facts.

Google is being sued based on the recommendations that YouTube’s algorithms make to users in the familiar “up next” box. Twitter is accused of making insufficient efforts to remove pro-terror postings. The immunity issue is squarely presented only in the Google case. But because a Google victory would almost certainly bar the lawsuit against Twitter, the immunity argument is worth considering in detail.

The relevant question before the court is how to interpret Section 230(c)(1) of the Communications Decency Act, adopted by Congress in 1996, after a New York court held an ISP liable for purportedly defamatory material posted on a message board it hosted.

The text is straightforward: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” When commentators refer to the statutory immunity of ISPs, this is the main provision they have in mind.

Here’s how the statute works: If I upload a video to YouTube, I’m the content provider, but YouTube is neither the speaker nor the publisher. Therefore, should my video cause harm — defamation, say — YouTube isn’t liable.

Seems simple, right? But now we come to what the justices must decide: If Google creates an algorithm that recommends my harmful video to you, is the video still provided by “another” provider, or is the provider now YouTube itself? Or, in the alternative argument, does the algorithm’s recommendation transform Google into the video’s publisher? Either interpretation of the statute would allow the plaintiffs to circumvent the statutory immunity.

Those aren’t easy questions to answer. But they also aren’t policy questions that should be tossed back to Congress. They involve nothing but the ordinary, everyday work of the courts: the determination of the meaning of a statute that’s susceptible to more than one interpretation.

In fact, the courts have ruled often on the bounds of Section 230 immunity. In perhaps the best-known example, the US Court of Appeals for the 9th Circuit ruled in 2008 that the section offered no protection to a roommate-matching site that required users to answer questions that those offering housing could not legally ask. The questions, wrote the court, made the site “the developer, at least in part” of the relevant content.

In the Google case, on the other hand, the 9th Circuit held that the selection algorithm is just a tool to help users find the content they want, based on what the users themselves have viewed or searched for. Using the algorithm didn’t make Google the creator or developer of the ISIS recruitment videos that are the centerpiece of the case because the company did not materially contribute to the videos’ “unlawfulness.” Judge Ronald Gould’s dissent took the view that the plaintiffs should be allowed to go to trial on their claims that Google “knew that ISIS and its supporters were inserting propaganda videos into their platforms” and should share legal liability because YouTube, through its selection algorithms, “magnified and amplified those communications.”

At oral argument in the Google case, Justice Ketanji Brown Jackson wondered whether the ISPs are turning Section 230 inside out. The provision was written, she said, to allow the companies to block certain offensive materials. How, she asked, was it “conceptually consistent with what Congress intended” to use the section as a shield for promoting offensive materials?

The answer depends on whether using an algorithm to decide which content to recommend is the same as saying to the user “This is great stuff that we fully endorse!” Here, my own view is that Big Tech has the better of the argument. But the case is an extremely close one. And I certainly don’t think that a court ruling against the ISPs would cause the sky to fall.

Google warns in its brief that should the plaintiffs’ interpretation of Section 230 prevail, the company will be left with no means to sort and categorize third-party videos, to say nothing of deciding which, if any, to recommend to a given user. And the company goes further: “Virtually no modern website would function if users had to sort through content themselves.”

Good points! But not as good as they would be if the company’s YouTube subsidiary, along with other ISPs, hadn’t spent so much time in recent years tweaking algorithms to meet government objections to the content recommended to users. Which is to say, should the ISPs lose, I think they would work it out.

I suspect that what worries the ISPs is less the potential complexity of complying with a narrower immunity and more the flood of lawsuits, many ungrounded, that would surely follow. That’s a genuine worry — and unlike the proper interpretation of a statute, it’s exactly the sort of problem that we might want Congress to resolve.

