Justice Elena Kagan quipped on Tuesday: “We are a court. We really don’t know about these things. You know, these are not the top nine Internet experts.” The Supreme Court of the United States was weighing the scope of “the 26 words that created the internet,” as Google’s lawyer put it, borrowing an oft-repeated expression. This is the provision that allows platforms and social networks to remove content they consider inappropriate and, at the same time, shields them from liability for content uploaded by third parties. Both in Tuesday’s hearing and in the one held this Wednesday, the justices seemed inclined to preserve that shield.
Two different cases were argued. Tuesday’s hearing focused on whether the recommendations of YouTube’s algorithm (and, by extension, those of any social network) are protected in the same way as third-party content. The relatives of Nohemi Gonzalez, one of the victims of the Islamic State attacks that struck Paris on November 13, 2015, at the Bataclan concert hall and other locations in the French capital, sued Google, the owner of YouTube, over the dissemination of Islamic State videos.
The case argued this Wednesday examined whether social networks in general, and Twitter as the lead defendant in particular, had aided the growth of certain terrorist organizations. Here the lawsuit was filed by the relatives of a victim of the terrorist attack on the Reina nightclub in Istanbul, in which 39 people died during the New Year’s Eve celebration in the early hours of January 1, 2017.
The Supreme Court justices may not be the greatest Internet experts, as Kagan acknowledged, but they did reveal their position. The fact that they had taken up both cases at once, and the doubts some had expressed in the past about this liability shield, suggested the Court was prepared to change its interpretation of the rule. However, judging from what was heard over the last two days, they appear inclined to leave that shield in place.
In the first case, Gonzalez v. Google, the lawyer for Nohemi Gonzalez’s relatives kept shifting his arguments. In the end, his complaint centered on the way YouTube’s algorithm invites viewers of Islamic State videos to watch similar ones. “In some circumstances, the way third-party content is organized or presented could convey other information from the defendant itself,” he said, narrowing his argument to the thumbnails the platform suggests while a user watches another video.
The justices immediately showed their skepticism. Justice Clarence Thomas and others said they found it unremarkable for YouTube to show cat videos to those who watch cat videos, cooking videos to those who watch cooking videos, and the same for racing or ISIS videos. “I think you’re going to have to give us a clearer example of exactly what you mean,” he challenged. Chief Justice John Roberts also said he had a hard time holding Google liable when the algorithm is generic, not one designed to promote terrorist content.
Elena Kagan spoke along the same lines: “This was a pre-algorithm statute, and everyone is trying their best to figure out how this statute applies in a post-algorithm world,” she said, later concluding that “algorithms are endemic to the Internet”; every time someone looks at something on the Internet, there is an algorithm involved.
Although the Gonzalez family’s lawyer insisted that once the platform begins suggesting things a user has not expressly requested it should no longer be shielded, he did not seem to convince the justices. “I don’t understand how a neutral suggestion about something you’ve expressed an interest in is aiding and abetting [terrorism]. I’m trying to get you to explain to us how something that is standard on YouTube for virtually anything that you have an interest in suddenly amounts to aiding and abetting because you’re in the ISIS category,” Thomas told him.
Justice Samuel Alito also seemed to reject that argument: “I’m afraid I’m completely confused by whatever argument you’re making. (…) Is [YouTube] acting as a publisher just by showing these ISIS video thumbnails after an ISIS video search?” he said, reducing the argument to absurdity: “So if you want that provision to protect you, you shouldn’t use thumbnails?”
Sonia Sotomayor and Elena Kagan suggested there might be a middle ground between that weak thumbnail argument and setting no limits at all. Faced with the need to draw that dividing line, Kagan wondered: “Isn’t that something Congress should do, not the Court?”
And Justice Brett Kavanaugh voiced concern about the briefs filed in the case warning that removing this shield would put the Internet as we know it at risk. Departing now from the prevailing interpretation, he noted, “would create a great economic shock; it would really crash the digital economy, with all sorts of effects on workers and consumers, retirement plans…” He concluded: “We are not in a position to account for that. (…) Are we really the right body to depart from what had been the text and the consistent interpretation of the courts of appeals?”, likewise pointing to Congress.
Google’s lawyer, Lisa Blatt, defended the use of algorithms and recommendations as inherent to the activity of Internet services. “All publishing requires organization and inherently conveys that same implicit message,” she said. According to Blatt, Section 230(c)(1) “reflects Congress’s decision to protect websites for publishing other people’s speech, even if they intentionally publish other people’s harmful speech.” “Congress made that decision to prevent lawsuits from choking the Internet in its infancy. The result has been revolutionary. Innovators opened new frontiers for the world to share infinite information, and websites necessarily pick, select and organize which third-party information users see first,” she added, warning: “Exposing websites to liability for implicitly recommending third-party content defies the text and threatens today’s internet.”
Wednesday’s case, Twitter v. Taamneh, shared some elements with the first and differed in others. The lawsuit brought by the relatives of one of those killed in the Istanbul attack contends that Twitter, Facebook and Google bear responsibility for the attack because the Islamic State used social networks to build its notoriety, spread its messages and, ultimately, recruit terrorists. A lower court allowed the lawsuit to proceed, and the Supreme Court justices must decide whether it can go forward.
Once again, the justices leaned more toward the technology companies than the other party, despite an intervention by Justice Kagan comparing the tech companies to banks, which are prosecuted in cases of money laundering or terrorist financing. “We are used to thinking of banks as very important service providers to terrorists. We may not be so used to it, but it seems to be true that various kinds of social media platforms also provide services to terrorists,” she said.
Most of the justices seemed to lean toward the position laid out by Justice Alito, who pointed out that it would make no sense to hold telephone companies responsible for the criminal activity of people who use their phones. What if the company “knows that a particular person has a criminal record and is likely to be engaged in criminal activity and uses the phone to communicate with other gang members? Is that aiding and abetting the crimes they commit?”
The anti-terrorism statute punishes those who knowingly help a person commit a terrorist act. Justice Amy Coney Barrett stressed that there are not enough concrete facts to support the claim, beyond general assertions that the networks, and Twitter in particular, were used for broad recruitment or radicalization of followers. The lawsuit offers no concrete evidence, such as direct messages, comment threads or other indications, that the network was being used to coordinate activities for a terrorist attack.
At one point, Justice Sonia Sotomayor asked Seth Waxman, the attorney for Twitter, Facebook and Google, to help her outline what an opinion ruling in favor of his clients would look like. “Write it for me,” she told him. Justice Brett Kavanaugh summed up Waxman’s argument this way: “Where there is a legitimate business that provides services on a widely available basis… you’re not going to be liable under this provision, even if you know bad people are using your services for bad things.” And Waxman agreed: “Correct, unless you have specific knowledge, in this case, of accounts or messages that are, in fact, being used to plan or commit a terrorist act, including an attack like the one that injured the plaintiff. That is, there must be concrete knowledge in that context. That’s our standard.”
The rulings in the two cases are expected in June, when the Court’s term ends before the summer recess. That is the month in which the most significant decisions tend to be handed down.