Thursday, February 12, 2026

A Necessary Abomination: SHIFTING THE BURDEN OF PROOF

How to Hold Digital Platforms Accountable for Selective Enforcement

A Legal and Policy Framework for Transparent Content Moderation

February 2026


Executive Summary

Digital platforms claim Section 230 immunity based on “good faith” content moderation. Yet they refuse to provide the evidence that would prove good faith actually occurred. This white paper proposes a fundamental inversion of the burden of proof: platforms should have to demonstrate good faith, rather than forcing users to prove bad faith. Without this shift, Section 230 becomes a license for arbitrary censorship.

The current system produces three documented problems:

Absence of accountability: Users receive suspensions without specific citation of offending content

Suppression of organizing: Vague harassment policies enable selective suppression of boycotts and political campaigns

Regulatory pretext: Platforms invoke legal compliance as cover for content decisions that contradict their stated policies

This paper outlines a legislative and litigation framework to reverse this dynamic. The mechanism is simple: platforms claiming Section 230 immunity must document their enforcement in specific, verifiable, and transparent ways. Failure to do so forfeits immunity for that enforcement action.

THE ASK: What Should Happen Next

State Attorneys General (2026): Issue Civil Investigative Demands to platforms, alleging deceptive practices under consumer protection statutes. No new law required. Timeline: 90-180 days to settlement.

Private Litigators (Q1 2026): File class actions on breach of contract and deceptive practices grounds. Use discovery to expose internal enforcement guidelines. Section 230 is not a defense to fraud.

Congress (2026-2027): Pass the Platform Accountability and Transparency Act (PATA). Condition Section 230 immunity on documented good faith. Establish user appeal rights and transparency reporting. Bipartisan coalition available now.

I. The Problem: Good Faith as Assumption

The Current Legal Standard

Section 230 of the Communications Decency Act, enacted in 1996, contains two distinct immunities:

Section 230(c)(1) provides that no provider or user of an interactive computer service “shall be treated as the publisher or speaker of any information provided by another information content provider.” Section 230(c)(2) separately shields providers from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

For nearly three decades, courts have interpreted these provisions permissively. The operating assumption has been that platforms moderate in good faith unless proven otherwise, and the burden falls on the challenger to show bad faith. This is backwards.

The Evidence Problem

Good faith requires transparent, consistent application of stated rules. Yet platforms systematically refuse to disclose:

The specific post or comment that triggered suspension

Which policy clause was violated and how

The reasoning for escalation decisions

Comparable enforcement data showing consistency

Internal standards and exceptions applied to different users or categories

When platforms refuse to disclose this information, they effectively make good faith unfalsifiable. A user suspended for “harassment” cannot know if the standard is consistently applied because the platform won’t provide the data. This is not moderation. This is power without accountability.

The Organizing Problem

Vague policies create particularly acute problems for political organizing and boycott campaigns. Consider the structure of a typical harassment policy:

“Do not persistently target or create content designed to humiliate or degrade individuals. Do not engage in posts or responses made primarily to anger, provoke, or belittle.”

This language is broad enough to sweep in organized criticism. A sustained Amazon boycott campaign necessarily involves repeated criticism, “targeting” of the company’s practices, and content meant to provoke response and moral reckoning. These are features of political organizing, not harassment.

Yet under current policy, a platform can suspend an organizer for “persistent targeting” without citing which specific posts crossed the line, what alternative framing would have been acceptable, or how many other accounts engaged in identical behavior without consequences. The organizer has no basis to appeal or to warn others. The rule becomes whatever the platform decides it is.

II. The Legal Framework: Reversing the Burden

A. The Invertible Burden

The centerpiece of this framework is simple: shift from “prove bad faith” to “prove good faith.”

Rather than requiring challengers to demonstrate that a platform acted with intent to censor, require platforms claiming Section 230 immunity to affirmatively demonstrate that their enforcement decisions complied with stated policy and were applied consistently. If they cannot produce that evidence, immunity is forfeited for that action.

This is not radical. It is how professional accountability works everywhere else:

Police must document the specific conduct that led to arrest

Employers must document performance issues before termination

Banks must document violations of policy before account closure

Schools must document disciplinary decisions with specific incidents

Platforms alone claim exemption from this basic requirement.

B. The Statutory Foundation

Section 230(c)(2) explicitly conditions immunity on good faith. The statute is already on the books. What is missing is enforcement of that condition. Proposed statutory language:

No provider of an interactive computer service shall be exempt from liability for content moderation decisions under Section 230(c)(2) unless the provider can demonstrate, upon request, that any account restriction, suspension, or removal of content was (i) made in accordance with a publicly posted policy; (ii) accompanied by specific identification of the content at issue, with timestamp and full text; (iii) accompanied by citation to the specific policy clause violated; (iv) accompanied by reasoning for why the identified conduct violated the stated clause; and (v) documented in a manner that demonstrates consistency with comparable enforcement decisions.

The burden shifts. Platforms retain immunity if they can show their work. They lose it if they cannot or will not.
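To make the five clauses concrete, here is a minimal sketch in Python of the per-action record they imply. The class and field names are illustrative assumptions, not statutory text and not any platform’s actual schema.

    # Hypothetical per-action record mirroring clauses (i)-(v) of the proposed language.
    # Names and structure are illustrative assumptions, not an existing API.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class EnforcementRecord:
        user_id: str
        action: str                      # e.g. "suspension", "removal", "restriction"
        policy_url: str                  # (i) the publicly posted policy relied on
        content_text: str                # (ii) full text of the content at issue
        content_timestamp: datetime      # (ii) timestamp of the content
        policy_clause: str               # (iii) the specific clause alleged to be violated
        reasoning: str                   # (iv) why the content violates that clause
        comparable_action_ids: List[str] = field(default_factory=list)  # (v) consistency evidence

        def is_documented(self) -> bool:
            """True only if every element the proposed language requires is present."""
            return all([
                self.policy_url,
                self.content_text,
                self.policy_clause,
                self.reasoning,
                self.comparable_action_ids,
            ])

Under this sketch, an undocumented action is simply one whose record fails is_documented(), which is the condition that would forfeit immunity.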

C. Litigation Theories

1. Due Process (State Action)

If digital platforms function as the primary forum for public speech, which they demonstrably do, there is a credible argument that state action doctrine should apply. On that theory, users are entitled to notice and an opportunity to be heard about what they allegedly did wrong before their speech is suppressed.

“Notice” cannot mean “your account was suspended.” It means “here is the specific post you made, here is the rule it violated, and here is why we believe it crossed the line.” Anything less is arbitrary and cannot survive 14th Amendment scrutiny if state action is found.

2. Breach of Contract

Users agree to Terms of Service in exchange for access to the platform. Terms of Service specify content policies. If platforms violate those policies arbitrarily, they breach the contract. They cannot rely on vague policy language to disclaim liability if enforcement is inconsistent with how the policy is written.

A platform cannot simultaneously claim “we have clear standards” (in the TOS) and “we have discretion to apply them however we want” (in enforcement). One or the other gives way.

3. Consumer Protection

Platforms represent they moderate “in good faith” and “fairly.” If enforcement is arbitrary and opaque, that is deceptive. State attorneys general have broad authority to pursue unfair or deceptive practice claims. The absence of specific citation to policy violations is evidence of deception.

4. Antitrust (Leveraged Foreclosure)

If platforms use vague harassment rules to suppress organizing specifically targeted at their business practices, they are using their monopoly power over distribution to foreclose competition and protect themselves from accountability. This is leveraged foreclosure under Sherman Act Section 2.

This theory is weaker than the others but worth including, particularly if discovery reveals patterns of enforcement that track attempts to organize against the platform itself.

III. Implementation: Legislative and Litigation Pathways

A. Legislative Strategy

1. Bipartisan Foundation

This proposal has surprising bipartisan appeal:

Conservative critics: “Platforms suppress conservative speech without proof”

Progressive critics: “Platforms suppress organizing and marginalized voices without transparency”

Both: “We don’t know what standards are actually being applied”

This is not a partisan demand for less moderation or for more. It is a demand for accountability in whatever moderation does happen. Congress can move on this.

2. Legislative Language

A standalone bill could be titled the “Platform Accountability and Transparency Act” (PATA):

Key Provisions:

Platforms claiming Section 230 immunity must maintain records of every enforcement action, including: specific content, policy basis, reasoning, severity level, and appeals outcome.

Upon user request, platforms must provide written notice of enforcement actions specifying the above within 30 days.

Users have the right to appeal enforcement decisions. Platforms must respond with written reasoning within 60 days. Failure to respond results in automatic reversal.

Platforms must publish quarterly transparency reports showing enforcement patterns by policy category, appeals granted/denied, and ratios of content labeled vs. removed.

Platforms failing to comply forfeit Section 230 immunity for that enforcement action, making them liable as publishers for the content in question.
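The record-keeping and transparency-reporting provisions above are easy to prototype. The sketch below, in Python and using assumed field names rather than any real platform schema, shows how the required quarterly metrics (enforcement by policy category, appeals granted and denied, and the labeled-versus-removed ratio) could be computed directly from per-action records.

    # Illustrative aggregation of per-action records into the quarterly metrics
    # PATA would require. Record fields are assumptions, not a real platform schema.
    from collections import Counter
    from typing import Dict, List

    def quarterly_report(actions: List[Dict]) -> Dict:
        by_policy = Counter(a["policy_category"] for a in actions)
        appeals = [a for a in actions if a.get("appealed")]
        granted = sum(1 for a in appeals if a.get("appeal_granted"))
        labeled = sum(1 for a in actions if a["action"] == "label")
        removed = sum(1 for a in actions if a["action"] == "remove")
        return {
            "enforcement_by_policy": dict(by_policy),
            "appeals_filed": len(appeals),
            "appeals_granted": granted,
            "appeals_denied": len(appeals) - granted,
            "label_to_removal_ratio": labeled / removed if removed else None,
        }

    # Example: two removals under "harassment" (one appeal, granted) and one "spam" label.
    sample = [
        {"policy_category": "harassment", "action": "remove", "appealed": True, "appeal_granted": True},
        {"policy_category": "harassment", "action": "remove", "appealed": False},
        {"policy_category": "spam", "action": "label", "appealed": False},
    ]
    print(quarterly_report(sample))

The point of the sketch is that nothing in the reporting requirement demands new technology; it is bookkeeping over records the bill already obliges platforms to keep.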

B. Litigation Strategy

1. Class Action Viability

A class action is viable if you can show:

Numerosity: Many users suspended without specific citation of policy violations

Commonality: All class members were denied notice of what they allegedly did wrong

Typicality: Your claims are representative of the class

Adequacy: Your interests align with the class and are not adverse to it

The likeliest claims are breach of contract and consumer protection. Damages include reputational harm, lost reach, emotional distress, and, in some cases, lost income if the user was using the platform for commerce.

2. Individual Action (Pre-Litigation)

Before litigation, file a formal request with the platform demanding:

Detailed written explanation of the enforcement action

Specific identification of the offending content

Citation to the policy clause violated

Data on comparable enforcement of that policy

Reversal or binding third-party arbitration

Document the platform’s refusal. This becomes evidence of bad faith. It also creates a paper trail for potential class certification.
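One way to keep that paper trail systematic is to log each demand, the date it was sent, and which items the platform actually addressed. The sketch below is a minimal illustration under assumed names; it is not a legal form and the structure is only a suggestion.

    # Minimal sketch of a pre-litigation paper trail: what was demanded, when,
    # and which demands the platform actually answered. Structure is illustrative only.
    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    DEMANDS = [
        "Detailed written explanation of the enforcement action",
        "Specific identification of the offending content",
        "Citation to the policy clause violated",
        "Data on comparable enforcement of that policy",
        "Reversal or binding third-party arbitration",
    ]

    @dataclass
    class DemandLog:
        platform: str
        sent: date
        demands: List[str] = field(default_factory=lambda: list(DEMANDS))
        answered: List[str] = field(default_factory=list)  # demands the platform actually addressed
        response_date: Optional[date] = None

        def unanswered(self) -> List[str]:
            """Demands the platform never addressed: the core of the bad-faith record."""
            return [d for d in self.demands if d not in self.answered]

    log = DemandLog(platform="ExamplePlatform", sent=date(2026, 3, 1))
    print(log.unanswered())  # with no response at all, every demand is documented as refused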

3. Discovery Value

Even if you don’t win, discovery produces value. You will gain access to:

Internal enforcement guidelines and criteria

Data on enforcement patterns and inconsistencies

Communications showing platform awareness of unfair application

Evidence of political bias or selective enforcement

This can inform legislative advocacy, regulatory complaints, and future litigation.

IV. Specific Applications to Current Dynamics

A. Amazon KDP and AI Training

Authors pulling books from Amazon KDP over AI training concerns face a transparency gap: Amazon refuses to state whether it is using KDP content to train AI, whether it offers opt-outs, or whether it compensates authors.

A demand for good faith enforcement here means Amazon must disclose: (1) what its actual AI training policy is, (2) whether authors were notified, (3) whether opt-out mechanisms exist, and (4) what enforcement mechanisms exist for authors who don’t consent.

Absent these disclosures, Amazon cannot credibly claim good faith in how it administers the platform.

B. Boycott Organizing on Bluesky and Meta

Bluesky recently suspended horror writer Gretchen Felker-Martin over comments about a political figure, later claiming it was part of a broader enforcement crackdown. But the platform:

Did not cite the specific posts triggering suspension

Did not explain how her comments differed from untouched posts by other users

Did not provide comparative data on enforcement of its harassment policy

Under the proposed framework, Bluesky’s lack of specificity is itself evidence of bad faith. The suspension cannot be upheld.

C. Apartment Building Sticker Campaigns

The physical world has a structural advantage: stickers in shared apartment spaces don’t depend on platform algorithms. But if organizing moves online to coordinate sticker placement or discuss Amazon boycotts, platform accountability matters.

If someone is suspended for coordinating a sticker campaign or an Amazon boycott online, they should be able to demand: which specific post violated which specific policy? If the platform cannot answer, immunity is forfeited.

V. Counterarguments and Responses

A. “This Violates Free Moderation Rights”

Platforms will argue they have the right to moderate as they see fit. But this conflates two separate things:

The right to moderate (yes, they have this)

The right to immunity from liability for arbitrary moderation (no, they should not have this)

They can keep the first and lose the second. They can moderate however they want, but if they do it without transparency and consistency, they lose Section 230 protection and become liable as publishers.

B. “This Will Paralyze Moderation”

Platforms will argue that requiring documented enforcement will slow moderation and create legal exposure. But documented decision-making is how professional organizations already operate. Platforms can build systems to track and document decisions (many already do internally), and courts can manage any resulting disputes through expedited discovery orders.

If documentation would “paralyze” moderation, that is because moderation is currently arbitrary. Making it consistent and documented will not paralyze it. It will constrain it to its legitimate scope.

C. “This Requires AI and Scale We Don’t Have”

Smaller platforms might argue compliance is prohibitively expensive. But scale is not an excuse for opacity. Smaller platforms can implement simpler systems: specific citation of content and policy, manual review, documented appeals.

If a platform cannot document why it suspended someone, that is a sign its moderation is not ready for public use. The burden should be on platforms to build accountability into their infrastructure, not on users to accept arbitrary enforcement.

D. “Privacy and Safety Require Opaque Enforcement”

Platforms will argue that transparency about enforcement enables bad actors to evade moderation. This is overstated. You can disclose enforcement decisions to the affected user without publishing them broadly. You can cite policy violations without revealing your detection methods.

Moreover, safety arguments are often post hoc justifications. Platforms don’t cite privacy concerns when disclosing user data to advertisers or law enforcement.

E. The First Amendment Defense: Why It Fails

The most sophisticated platform defense will be constitutional: requiring documented enforcement violates their First Amendment right to editorial discretion. This argument is compelling but ultimately conflates two distinct legal questions.

1. Editorial Discretion vs. Contractual Obligation

Platforms have a First Amendment right to decide what content to host. But they have no First Amendment right to lie about how they make those decisions.

The proposed framework does not restrict what platforms can moderate. It requires transparency about what they actually did moderate, measured against their own stated rules.

This is a contract issue, not a speech issue. Platforms promised their users specific enforcement standards. Users relied on those promises. When platforms violate them, that is breach of contract—not censorship of the platform.

2. The Employment Law Analogy

An at-will employer has broad discretion to fire an employee. But the employer cannot simultaneously claim unlimited at-will discretion while also publishing an employee handbook promising specific grounds for termination.

When a fired employee alleges the handbook was violated, the court looks to the handbook, compares it to what actually happened, and holds the employer to its own stated standards. The employer cannot withhold its documentation and hide behind discretion.

Platforms are in the same position. They published Community Guidelines. They promised fair enforcement. Users relied on those promises. They cannot withhold the documentation and claim the First Amendment.

3. The DMCA Precedent

The Digital Millennium Copyright Act (DMCA) has required platforms seeking its safe harbor to follow a documented notice-and-takedown procedure since 1998. That system requires platforms to:

Act on notices that identify the specific allegedly infringing material

Promptly notify the affected user that the material was removed

Identify the content at issue and the claim made against it

Accept counter-notices and restore the material unless the claimant files suit

The DMCA has operated for nearly three decades without “breaking the internet.” Platforms have not argued that documenting copyright removals violates their First Amendment rights. Yet the proposed framework is structurally identical.

If platforms can document why they removed copyrighted content, they can document why they removed content for “harassment.” The difference is that copyright is clearer law, so platforms chose compliance. With content moderation, they chose opacity. That choice was not compelled by the First Amendment. It was driven by profit.
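The structural identity can be made concrete. Below is an illustrative pair of record layouts, one for a DMCA takedown and one for the proposed moderation documentation; the field names are assumptions made for the sketch, not the text of either regime.

    # Side-by-side sketch: what platforms already document for copyright takedowns
    # versus what the proposed framework asks them to document for policy enforcement.
    # All field names are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DmcaTakedownRecord:
        removed_material: str          # the specific material taken down
        claimed_work: str              # the copyrighted work identified in the notice
        user_notified: bool            # the affected user was told of the removal
        counter_notice: Optional[str]  # the user's counter-notification, if any

    @dataclass
    class ModerationRecord:
        removed_content: str           # the specific post or comment acted on
        policy_clause: str             # the guideline clause cited (parallel to claimed_work)
        user_notified: bool            # notice sent to the affected user
        appeal: Optional[str]          # the user's appeal, if any

    # The two records have the same shape; only the legal basis field differs.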

4. The Consumer Protection Reframe

The strongest legal position avoids direct First Amendment confrontation by focusing on what platforms actually promised vs. what they actually did.

Claim: Platforms violated their Terms of Service and engaged in deceptive practices. Not: “The First Amendment requires you to justify moderation.” But: “You promised users you would apply specific standards. You didn’t. That is fraud.”

This frames the issue as contract and consumer protection, which are not speech issues at all. A platform can have absolute discretion over content AND be required to be honest about how it exercises that discretion.

State attorneys general have explicit authority over unfair and deceptive practices. They don’t need a new law. They can act under existing consumer protection statutes.

5. Why First Amendment Doctrine Supports This

Courts have consistently held that the First Amendment does not protect fraud, misrepresentation, or the violation of contractual promises. In “New York Times v. Sullivan,” the Court protected vigorous, even caustic speech about public officials. But it did not protect knowing or reckless falsehoods.

Similarly, platforms can moderate as they wish, but they cannot simultaneously claim to follow rules they don’t follow. If their enforcement is arbitrary and they hide it, they are not speaking. They are deceiving.

The First Amendment protects platforms’ right to moderate. Consumer protection law protects users’ right to know whether platforms are telling the truth about how they moderate. These are compatible.

VI. The Contract and Consumer Protection Strategy: Avoiding the First Amendment Trap

As of 2026, the most viable legal pathway avoids direct First Amendment confrontation by reframing the issue as contract enforcement and consumer protection rather than as compelled speech or editorial control.

A. Why Contract is Stronger Than First Amendment

Platforms will inevitably argue: “Requiring us to cite specific posts and policies violates our First Amendment right to editorial discretion.” This defense sounds strong but is legally weak when reframed correctly.

The proposed framework does not tell platforms what content to host or remove. It tells platforms they must tell the truth about how they exercise their discretion. That is not a speech issue. That is a fraud issue.

A platform can have absolute discretion over content AND be required to be honest about how it exercises that discretion. These are not contradictory. They are complementary.

B. The Statutory Basis Already Exists

No new First Amendment law is needed. State consumer protection statutes already authorize action against unfair and deceptive practices. As of January 2026, California, Virginia, and Indiana have implemented laws requiring social media companies to be transparent about account cancellations and content moderation.

The legal theory: Platforms represent (in their Terms of Service) that they enforce specific policies consistently. Users rely on this representation. If enforcement is arbitrary and inconsistent, that is deceptive. State AGs can pursue this immediately under existing law.

This sidesteps Section 230 entirely. Section 230 shields platforms from liability for third-party content. It does not shield them from liability for lying about their own enforcement practices.

C. Breach of Contract: The Private Law Enforcement

Users agree to Terms of Service. Terms of Service specify policies. If platforms violate those policies arbitrarily, they breach the contract. A user suspended for “harassment” without citation of the specific post, or without an explanation of how that post violated the stated policy, has a breach of contract claim.

Damages include:

Reputational harm (account suspended without cause)

Loss of access to the platform

Lost income (for creators and organizers dependent on the platform)

Emotional distress and social harm

Platforms will argue their TOS includes language like “we can terminate accounts for any reason.” But a TOS cannot simultaneously promise specific enforcement standards and reserve the right to ignore them. If the TOS promises specific grounds for enforcement, platforms must follow them.

A class action would allege: Platforms promised specific enforcement standards, violated them arbitrarily, and caused harm to users who relied on the promises. This is textbook breach of contract.

D. How This Avoids the First Amendment Problem

The contract and consumer protection approach never asks a court to regulate platform speech or editorial discretion. It only asks courts to:

Compare what platforms promised (in TOS and Community Guidelines)

With what platforms actually did (in enforcement decisions)

And hold them accountable for the gap

This is not speech regulation. It is contract enforcement. The First Amendment has no application.

Courts enforce contracts between private parties all the time without implicating the First Amendment. Platforms are not exempt from this baseline principle.

E. Compliance Checklist: What Platforms Must Do

The requirements can be simplified into a straightforward checklist. Platforms claiming Section 230 immunity must satisfy ALL of the following for every enforcement action:

Required Actions:

Identify the specific content: Full text or link to the exact post/comment, with timestamp

Cite the policy clause: Which Community Guideline, Terms of Service provision, or rule was violated

Explain the violation: How the identified content meets the definition in the cited policy

Show consistency: Data demonstrating that comparable content by other users received comparable enforcement

Document the appeal: Provide written notice within 30 days of enforcement; accept appeals; respond with written reasoning

Failure to Comply:

Missing #1 (specific content): Immunity forfeited; platform liable as publisher

Missing #2 (policy citation): Immunity forfeited; enforcement presumed arbitrary

Missing #3 (explanation): Enforcement action voidable; user may challenge it directly in court

Missing #4 (consistency data): Immunity forfeited; enforcement presumed discriminatory

Missing #5 (documented appeal): Automatic reversal of enforcement action

This checklist is simple enough for lawmakers to codify, clear enough for judges to apply, and achievable enough that no platform can claim technical impossibility.
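As a sketch of how mechanically a court or regulator could apply the checklist, the function below takes a hypothetical enforcement record and reports which requirements are missing along with the consequence listed above. The record keys are assumptions chosen for illustration, not a mandated format.

    # Illustrative checklist audit. The keys and consequence text mirror the five
    # requirements above; they are assumptions, not statutory language.
    from typing import Dict, List, Tuple

    CHECKLIST = [
        ("content", "Immunity forfeited; platform liable as publisher"),
        ("policy_clause", "Immunity forfeited; enforcement presumed arbitrary"),
        ("explanation", "Enforcement action voidable; user may challenge it directly in court"),
        ("consistency_data", "Immunity forfeited; enforcement presumed discriminatory"),
        ("appeal_record", "Automatic reversal of the enforcement action"),
    ]

    def audit(record: Dict) -> List[Tuple[str, str]]:
        """Return (missing requirement, consequence) pairs for a single action."""
        return [(key, consequence) for key, consequence in CHECKLIST if not record.get(key)]

    # A suspension documented only as "violated our harassment policy":
    sparse = {"policy_clause": "Community Guidelines section 4 (harassment)"}
    for missing, consequence in audit(sparse):
        print(f"missing {missing}: {consequence}")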

F. Concrete Hypothetical: Making the Distinction Intuitive

Here is a real-world scenario that illustrates why this is contract/fraud law, not free speech law:

The Scenario

A user posts a thread organizing an Amazon boycott, using the hashtag #AmazonBoycott and tagging Amazon’s corporate account to draw attention to its labor practices. The post names specific Amazon executives and their policy decisions. It is forceful, repeated, and clearly designed to provoke response and moral reckoning.

The platform suspends the account for “harassment.” When the user requests details, the platform responds: “Your account violated our policy against persistent targeting. We will not provide further information.”

Why This Is Contract/Fraud, Not Censorship

The platform’s Community Guidelines say: “We prohibit coordinated harassment campaigns designed to intimidate or humiliate individuals.” The guideline is about harassment—conduct designed to harm the target personally.

The user’s post is political organizing. It targets corporate behavior, not an individual’s personal characteristics. It contains factual criticism, not humiliation. This does not match the stated policy.

The platform promised users that “persistent targeting” means something specific. The user relied on that promise. The user organized their speech based on the understanding that political campaigns are not harassment. The platform then applied an inconsistent definition and refused to explain.

This is breach of contract. The platform promised Rule X, applied Rule Y, and hid the difference.

Why First Amendment Does Not Protect the Platform

The platform will argue: “We have a First Amendment right to decide what content to host. Requiring us to explain why we removed content is compelled speech.”

But the court will recognize: No one is forcing the platform to say anything. The user is not demanding the platform publish the post. The user is demanding the platform tell the truth about why it unpublished the post. That is fraud enforcement, not speech regulation.

The platform can still remove the post. It just cannot remove the post, refuse to explain, hide the criteria it applied, and claim the First Amendment lets it do so.

That is the fundamental distinction. Moderation is protected. Lying about moderation is not.

G. State Attorney General Enforcement: Acting Under Existing Law

State attorneys general do not need to wait for legislative action or win lawsuits. They can file enforcement actions now under existing consumer protection statutes.

The theory: Platforms engage in unfair and deceptive practices by:

Representing specific enforcement standards in their TOS

Failing to disclose that enforcement is actually arbitrary

Refusing to provide documentation of enforcement decisions

Causing financial and reputational harm to users who relied on the misrepresentations

This requires no new law. States like California already have statutes defining unfair and deceptive practices. Filing a complaint is relatively fast compared to litigation.

Settlement might include: transparency reports, documented enforcement procedures, appeals processes, and in some cases refunds or credits to harmed users.

VII. Implementation Playbook: Three Audiences, Three Strategies

A. For State Attorneys General: Move Now, No New Law Required

State AGs have enforcement authority today under existing consumer protection statutes (California Unfair Competition Law, Virginia Consumer Protection Act, Indiana Deceptive Consumer Sales Act, etc.). No federal law is needed.

Step 1: Issue a Civil Investigative Demand (CID)

Request all enforcement data for the past 24 months: content removed, policy cited, reasoning given, appeal outcomes

Request internal guidelines showing how “harassment,” “targeting,” and “provocation” are actually defined in practice

Request examples of comparable enforcement decisions showing consistency

Step 2: Analyze for Deceptive Practices

If enforcement data shows: (a) vague policies applied inconsistently, (b) suspensions without specific citation, (c) refusal to provide appeal reasoning—this is prima facie evidence of deception

The deception: Users rely on published Community Guidelines, but actual enforcement differs in undisclosed ways
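Once the CID data arrives, the Step 2 screen is largely arithmetic. The sketch below, written against assumed field names for the produced data, computes the share of actions lacking a specific content citation and the share of appeals answered without reasoning; persistently high values are the prima facie indicators described above.

    # Illustrative screen of CID-produced enforcement data for the Step 2 indicators.
    # Field names are assumptions about how the data might be produced.
    from typing import Dict, List

    def deception_indicators(actions: List[Dict]) -> Dict[str, float]:
        total = len(actions)
        no_citation = sum(1 for a in actions if not a.get("cited_content_id"))
        appealed = sum(1 for a in actions if a.get("appealed"))
        no_appeal_reasoning = sum(
            1 for a in actions if a.get("appealed") and not a.get("appeal_reasoning")
        )
        return {
            "share_without_specific_citation": no_citation / total if total else 0.0,
            "share_of_appeals_without_reasoning": no_appeal_reasoning / appealed if appealed else 0.0,
        }

    sample = [
        {"cited_content_id": None, "appealed": True, "appeal_reasoning": None},
        {"cited_content_id": "post_123", "appealed": False},
        {"cited_content_id": None, "appealed": True, "appeal_reasoning": "reviewed, upheld"},
    ]
    print(deception_indicators(sample))
    # -> roughly {'share_without_specific_citation': 0.67, 'share_of_appeals_without_reasoning': 0.5}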

Step 3: Negotiated Settlement

Require platforms to implement the 5-point compliance checklist

Require quarterly transparency reports showing enforcement patterns

Require documented appeal processes with written responses

Seek civil penalty or fund for harmed users

Timeline: CID → 30 days response → negotiation → settlement within 90-180 days. This is faster than litigation and requires no congressional approval.

B. For Private Litigators: Filing a Class Action

A class action complaint should allege three counts: breach of contract, deceptive practices, and unjust enrichment.

Count I: Breach of Contract

Allegation: Users agreed to TOS containing specific enforcement standards. Platform violated those standards arbitrarily.

Damages: Lost access to account, reputational harm, lost income (for creators).

Count II: Unfair/Deceptive Practices

Allegation: Platform represented in TOS and Community Guidelines that enforcement follows specific standards. Enforcement actually differs. Platform refuses to disclose the difference.

Damages: Economic harm from reliance on misrepresentation.

Count III: Unjust Enrichment

Allegation: Platform profited from user-generated content and data while breaching duties to enforce promised policies consistently.

Damages: Disgorgement of profits or restitution.

Discovery Priorities

Internal enforcement guidelines: How are “harassment” and “targeting” actually defined?

Training materials for moderators: What instructions are they given?

Enforcement data by policy category: Show variance and inconsistency

Communications about boycott/organizing: Were specific guidelines issued around political campaigns?

Comparative analysis: How are suspensions handled for verified users vs. unverified?

The platform will move to dismiss under Section 230. Counter-argument: this is a breach of contract and fraud claim, not publisher liability, and the DMCA precedent shows that documentation requirements do not implicate Section 230.

C. For Legislators: The Legislative Language

The Platform Accountability and Transparency Act (PATA) should contain the following elements:

Necessary Provisions:

Condition Section 230(c)(2) immunity on documented enforcement: Platforms claiming immunity must provide specific identification of content, policy citation, and reasoning

Loss of immunity for non-compliance: Failure to document forfeits Section 230 protection for that enforcement action

Appeal rights: Users have 30-day window to appeal; platforms must respond in writing within 60 days

Transparency reporting: Quarterly disclosure of enforcement by category, appeals granted/denied, and consistency metrics

Nice to Have:

Third-party audits of enforcement practices

User compensation fund for wrongful suspensions

Criminal penalties for platform executives who knowingly conceal enforcement criteria

Lead sponsors should emphasize: This does not regulate what platforms moderate. It requires platforms to be honest about how they moderate. Bipartisan support exists (conservatives concerned about shadow censorship, progressives concerned about selective suppression of organizing).

VIII. Conclusion: The Path Forward

Digital platforms have become the central infrastructure for public speech. Yet they remain unaccountable in ways no other powerful institution tolerates. The solution is not to eliminate moderation or to ban platforms. It is to require that moderation be transparent and consistent.

Section 230 already requires good faith. What is missing is enforcement. But the strongest enforcement pathway is not through Section 230 itself. It is through contract and consumer protection law, where platforms cannot hide behind the First Amendment.

A Broad Coalition is Ready

This framework has natural allies:

Content creators and small businesses: Suspended without explanation; transparent enforcement protects them

Political organizers and activists: Platforms suppress boycotts and campaigns; transparency exposes it

Smaller platforms: Already implement documented moderation; larger platforms’ opacity is unfair competition

Civil libertarians (both left and right): Want to know what enforcement actually looks like

Journalists and researchers: Transparency enables accountability reporting

This framework has multiple parallel tracks:

Administrative (Fastest): State AGs file enforcement actions under consumer protection statutes alleging deceptive practices. No new law needed. Can begin immediately.

Litigation (Immediate): Users file class actions for breach of contract and deceptive practices. Contract claims avoid First Amendment defenses. Discovery forces disclosure of internal enforcement guidelines.

Legislative (Long-term): PATA and similar bills condition Section 230 immunity on documented good faith. Bipartisan support exists. Courts are already moving toward design-based negligence as a bypass to Section 230.

The First Amendment is not a defense to fraud. Platforms can moderate as they wish. But they cannot lie about how they moderate, and they cannot hide documentation of their enforcement decisions.

The DMCA proves this works. For nearly three decades, platforms have documented copyright removals without “breaking the internet.” They can document harassment removals with equal ease. The only reason they don’t is that opacity protects arbitrary enforcement that suppresses organizing and criticism.

Point to the post. Cite the policy. Show your work.

Or forfeit immunity and face liability as the publisher you have chosen to be.
