The “Punch Google” Federal Case

On Monday, November 2nd, Judge Beth Labson Freeman stated:
“I am inclined to deny this temporary restraining order.”


November 4th Update from Maryam Henein:  TRO Denied — Lawsuit Continues!


John Doe v. Google, the federal lawsuit initiated by 17 of the thousands of channels banned in the Great Social Media Purge of 2020, went before Federal Judge Freeman in California on Monday, November 2nd, the day before the election. The Judge reserved decision, thereby preventing serious allegations of political corruption from being aired on the Google/YouTube platform prior to the election.

We are awaiting the Judge’s written decision, so this article is based on notes taken by Maryam Henein of the group. Her article is here:

After the hearing, Cris Armenta, Esq., the plaintiffs’ attorney, stated:  “We are waiting for the Court’s decision, and then we will see what the next step will be.  She was unmoved by the breach of contract claim and indicated she would not look at the content to determine whether it was violative or a material violation. So, that was strange indeed.”

The Judge’s refusal to look at the content of the banned channels means she is not reading Section 230 of the Telecommunications Act to be about the content (which is clearly contrary to the words of the statute) but considers the statute to be some sort of blanket authorization for the social media giants to remove whatever may be “objectionable” in the subjective view of the defendants’ agents. [Note:  Google did not raise Section 230 in its defense — possibly because the statute does not protect what Google did!]

This is directly contrary to the implication of SCOTUS’ refusal to grant certiorari in the Enigma case just a few days ago.  Apparently the Judge decided not to consider the clear warning from the high court that Section 230 does not give the social media platforms unlimited power to censor content.

Maryam Henein commented, the Judge “did not even know how Youtube works! How can Google claim the plaintiffs and all who have been banned are creating ‘violent cyberbullying content’? Insanity!”

For further information on the case, and on the Supreme Court’s directive which Judge Freeman seems to have chosen to ignore, see:

That article includes a link to Justice Thomas’ comments in the Enigma case.  This excerpt suggests that Judge Freeman ought to reconsider her inclination.

…in an appropriate case, we should consider whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by Internet platforms… Enacted at the dawn of the dot-com era, §230 contains two subsections that protect computer service providers from some civil and criminal claims. The first is definitional. It states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” §230(c)(1). This provision ensures that a company (like an e-mail provider) can host and transmit third party content without subjecting itself to the liability that sometimes attaches to the publisher or speaker of unlawful content. The second subsection provides direct immunity from some civil liability. It states that no computer service provider “shall be held liable” for (A) good-faith acts to restrict access to, or remove, certain types of objectionable content; or (B) giving consumers tools to filter the same types of content. §230(c)(2). This limited protection enables companies to create community guidelines and remove harmful content without worrying about legal reprisal… ***

The decisions that broadly interpret §230(c)(1) to protect traditional publisher functions also eviscerated the narrower liability shield Congress included in the statute. Section 230(c)(2)(A) encourages companies to create content guidelines and protects those companies that “in good faith . . . restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Taken together, both provisions in §230(c) most naturally read to protect companies when they unknowingly decline to exercise editorial functions to edit or remove third-party content, §230(c)(1), and when they decide to exercise those editorial functions in good faith, §230(c)(2)(A).
But by construing §230(c)(1) to protect any decision to edit or remove content, Barnes v. Yahoo!, Inc., 570 F. 3d 1096, 1105 (CA9 2009), courts have curtailed the limits Congress placed on decisions to remove content, see e-ventures Worldwide, LLC v. Google, Inc., 2017 WL 2210029, *3 (MD Fla., Feb. 8, 2017) (rejecting the interpretation that §230(c)(1) protects removal decisions because it would “swallo[w] the more specific immunity in (c)(2)”). With no limits on an Internet company’s discretion to take down material, §230 now apparently protects companies who racially discriminate in removing content. Sikhs for Justice, Inc. v. Facebook, Inc., 697 Fed. Appx. 526 (CA9 2017), aff’g 144 F. Supp. 3d 1088, 1094 (ND Cal. 2015) (concluding that “‘any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune’” under §230(c)(1))… ***

Extending §230 immunity beyond the natural reading of the text can have serious consequences. Before giving companies immunity from civil claims for “knowingly host[ing] illegal child pornography,” Bates, 2006 WL 3813758, *3, or for race discrimination, Sikhs for Justice, 697 Fed. Appx., at 526, we should be certain that is what the law demands. Without the benefit of briefing on the merits, we need not decide today the correct interpretation of §230. But in an appropriate case, it behooves us to do so. [emphasis added]

