CODEX AMERICANA — WHITE PAPER SERIES
February 25, 2026
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
SEIZED BY THE STATE:
The Defense Production Act as Instrument of Executive Coercion Against Private AI
Redwin Tursor | Codex Americana
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
ABSTRACT
On February 24, 2026, Defense Secretary Pete Hegseth delivered an ultimatum to Anthropic CEO Dario Amodei: remove the company's ethical safeguards preventing autonomous weapons deployment and mass domestic surveillance of American civilians, or face the threatened invocation of the Defense Production Act (DPA) to compel compliance. This paper analyzes the constitutional, statutory, and historical dimensions of that threat — and demonstrates that it represents one of the most aggressive misapplications of emergency executive authority in the post-Korean War era. The DPA was designed to mobilize physical production during declared national emergencies. Its threatened deployment against a private software company — in peacetime, to strip proprietary intellectual property of the ethical constraints protecting domestic civilian populations — is a category error so fundamental that any resulting order would be constitutionally vulnerable under established Supreme Court precedent, and an expansion of DPA authority so aggressive and untested that it would likely trigger Major Questions scrutiny before a court even reached the merits.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
I. THE THREAT AND ITS CONTEXT
The dispute between the Pentagon and Anthropic did not emerge from a vacuum. For months, the Department of Defense has pressed the company to agree to "all lawful use cases" for its Claude AI model — language deliberately broad enough to encompass two applications Anthropic has categorically refused: fully autonomous weapons systems operating without meaningful human oversight, and mass surveillance of domestic American civilian populations.
These are not minor operational preferences. They are the two use cases Anthropic CEO Dario Amodei has publicly described as "illegitimate" and "prone to abuse." They are the two applications that constitutional scholars, civil liberties organizations, and AI safety researchers have identified as existential threats to democratic governance. And they are the two applications the Trump administration, through Hegseth, has demanded be unlocked by February 28, 2026, or the company will face federal punishment.
The threats are threefold. First, cancellation of Anthropic's $200 million Pentagon contract. Second, designation as a "supply chain risk" — a classification typically reserved for foreign adversaries that would require every defense contractor in the country to certify they do not use Anthropic technology. Third, and most constitutionally alarming: threatened invocation of the Defense Production Act to compel Anthropic to modify its own software against its will. As of this writing, the DPA has not been formally invoked. What exists is a credible, on-the-record threat — issued by the Secretary of Defense, with a stated deadline — that the executive branch is willing to use a 76-year-old wartime industrial statute to force the redesign of proprietary AI architecture. The threat itself is the constitutional event.
"The only reason we're still talking to these people is we need them and we need
them now. The problem for these guys is they are that good."
— Senior Pentagon official, quoted in Axios, February 24, 2026
That admission is the confession at the center of this white paper. The Pentagon's own officials have acknowledged, on background, that the coercion is driven not by security necessity but by operational dependency — and that the urgency belongs to the government, not the company. This matters enormously for the legal analysis that follows.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
II. WHAT THE DEFENSE PRODUCTION ACT ACTUALLY IS
The Defense Production Act was enacted in September 1950, three months after North Korean forces crossed the 38th parallel. The United States was at war. Industrial mobilization was a genuine national security imperative. Congress granted the President sweeping authority to direct private industry toward defense production — to prioritize contracts, allocate materials, and commandeer manufacturing capacity when the survival of the nation demanded it.
The core statutory authority rests in 50 U.S.C. § 4511, which grants the President power to require that contracts or orders "relating to the national defense be accepted and performed on a preferential basis," and to "allocate materials, services, and facilities in such manner, upon such conditions, and to such extent as he shall deem necessary or appropriate to promote the national defense." The statute defines "materials" to include raw materials, articles, commodities, and products. "Facilities" means plants, mines, and manufacturing installations. "Services" covers utilities and transportation.
Nowhere in the statute's text, legislative history, or 76 years of application does the DPA authorize the compelled modification of proprietary software, the forced alteration of a company's published ethical constraints, or the redesign of intellectual property the government does not own. The Hegseth ultimatum is not an extension of DPA authority. It is a demand that courts recognize authority the statute has never claimed.
Historical DPA deployments reflect the statute's actual scope without exception:
• Korean War (1950–1953): Directing steel mills, textile manufacturers, and chemical companies to prioritize military production contracts.
• Cold War era: Managing strategic materials stockpiles and prioritizing defense-critical manufacturing capacity.
• Post-9/11: Expediting production of body armor, vehicles, and communications equipment for deployed forces.
• COVID-19 pandemic (2020–2021): Compelling General Motors and 3M to produce ventilators and N95 respirators during a declared public health emergency.
• Semiconductor shortage (2022): Directing domestic chip production capacity toward defense-critical applications.
Notice what every deployment has in common: physical goods, declared emergencies, production capacity, and fungible commodities. Not once in 76 years has the statute been stretched to cover the forced editorial revision of proprietary code.
THE CATEGORY ERROR: A DIRECT COMPARISON
Feature | Traditional DPA (1950–2021) | The Hegseth Ultimatum (2026)
-----------------------|----------------------------------------|------------------------------------------
Primary Object | Fungible goods (steel, N95 masks) | Proprietary IP (neural weights, ethical architecture)
Action Demanded | Priority of delivery or production | Forced modification of ethical core
Legal Basis | Explicit shortage in declared emergency| Operational preference in peacetime
Statutory Hook | 50 U.S.C. § 4511 — materials/facilities| No textual basis for software redesign
Historical Precedent | 76 years of consistent application | No precedent. None.
Constitutional Exposure| Minimal — within established authority | Youngstown, Major Questions, 1st & 5th Amendments
The table above is not a rhetorical device. It is a description of what the statute says, what it has historically done, and what it is now being asked to do. The gap between column two and column three is the constitutional violation.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
III. THE CONSTITUTIONAL PROBLEM: YOUNGSTOWN SITS RIGHT HERE
The legal vulnerability of the Hegseth threat is not subtle. It walks directly into the most famous separation of powers case in American constitutional history — and it does so from a weaker factual position than the executive branch lost before.
In 1952, President Harry Truman — facing a steelworkers' strike during the Korean War, an active armed conflict with American troops dying in the field — issued an executive order directing the Secretary of Commerce to seize and operate the nation's steel mills to prevent a production shutdown. The justification was national security. The emergency was genuine. The war was real. The property was physical. The seizure was of existing assets, not a demand to redesign them.
The Supreme Court struck it down 6-3.
Youngstown Sheet & Tube Co. v. Sawyer (1952) established the foundational principle that presidential emergency authority does not override private property rights simply because the executive declares necessity. Justice Hugo Black, writing for the majority, was direct: the President's power to issue executive orders must stem either from an act of Congress or from the Constitution itself. Neither authorized the steel seizure.
Justice Robert Jackson's concurrence produced the tripartite framework that governs executive authority analysis to this day. When the President acts contrary to the express or implied will of Congress, his power is at its "lowest ebb," and courts will sustain his actions only if the Constitution grants him the authority directly, excluding Congress from the field entirely.
The Hegseth ultimatum is weaker than Truman's steel seizure on every axis that mattered to the Court:
• Truman seized property during active war. Hegseth is threatening software modification during peacetime, over a contractor dispute.
• Truman took existing physical assets. Hegseth demands the creation of a new product — a version of Claude stripped of its ethical architecture — that does not currently exist.
• Truman had a functioning labor emergency with direct production consequences. Hegseth has a preference for unrestricted AI access.
• Truman lost anyway.
Applying Youngstown to the Anthropic Ultimatum
1. The DPA Does Not Authorize Software Modification
50 U.S.C. § 4511 authorizes prioritization and allocation of "materials, services, and facilities." Courts have consistently interpreted these terms in their industrial context. Compelling a company to rewrite its own software — to remove specific ethical constraints from a bespoke AI model — is not the prioritization of production. It is the forced editorial revision of proprietary code. No court has ever extended § 4511's reach to compelled modification of intellectual property. The administration would be asking federal courts to make that leap for the first time, during peacetime, to enable domestic surveillance capabilities. Under Youngstown's framework, that is precisely the posture in which executive authority is most constrained.
2. No Declared National Emergency Exists
The DPA does not require a formal war declaration, and this paper does not claim otherwise. But it does require a genuine national security predicate — an actual emergency, not an operational preference. The United States is not at war with Venezuela. The capture of Nicolás Maduro was a covert action. Hegseth has not declared a national emergency. He has issued a contractor ultimatum. Invoking Korean War–era emergency industrial mobilization authority over a contractor dispute about usage policies represents exactly the kind of executive overreach the Youngstown framework was designed to prevent.
3. The Major Questions Doctrine Applies
Even if the administration argued that § 4511's "services" language encompasses software, the Supreme Court's major questions doctrine — articulated most recently in West Virginia v. EPA (2022) — requires Congress to speak clearly before agencies can exercise authority of "vast economic and political significance." Compelling the modification of frontier AI systems, removing safeguards against autonomous weapons and domestic surveillance, and establishing the precedent that the executive can strip private technology companies of ethical architecture whenever it invokes national security: this is unambiguously a major question. The DPA's drafters in 1950 were thinking about steel mills, not neural weights. Congress has not spoken to this. The statute does not address it. The doctrine applies.
4. First and Fifth Amendment Exposure
Even if the DPA were somehow interpreted to reach software, compelled alteration of a company's published ethical constraints raises unresolved First Amendment concerns. The safeguards in question are expressed in public usage policies — they are part of how Anthropic communicates its values to the world. Government-mandated revision of that expression is at minimum a compelled speech question courts would need to resolve. Additionally, the forced modification of proprietary intellectual property without compensation implicates the Fifth Amendment's Takings Clause. These are secondary exposures, not the central argument — but they represent additional constitutional surface area the administration would need to survive.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
IV. THE COUNTERARGUMENT — AND WHY IT FAILS
The strongest objection to Anthropic's position is this: if you accept classified defense contracts, you do not get to impose sovereign policy conditions on how the government uses the technology you agreed to supply. Boeing does not tell the Air Force which targets to engage. Lockheed does not veto strike packages. The contractor supplies the capability; the government determines the use.
This is a serious argument. It deserves a direct answer rather than an evasion.
The Boeing analogy fails at the level of consequence, not principle. The principle — that defense contractors do not control military operational decisions — is sound. But the analogy assumes equivalence of impact between a fighter jet and a frontier AI system capable of mass domestic surveillance and autonomous lethal targeting. There is no such equivalence.
A fighter jet does not surveil the entire American civilian population. It does not operate autonomously across billions of conversations. It does not make targeting decisions without a human pilot in the cockpit. The constitutional concern is not about who controls the weapon. It is about what the weapon can do to American citizens without accountability — and whether a company that built those capabilities into proprietary software can be compelled by executive order to remove the constraints preventing their domestic deployment.
More precisely: Anthropic is not claiming the right to veto military operations. It is declining to redesign its product to remove the only technical barriers to two specific capabilities — domestic mass surveillance and autonomous killing — that have no established legal framework, no congressional authorization for AI deployment, and no judicial oversight mechanism. The Pentagon's response to that position is not to build the legal framework. It is to threaten the company into removing the barriers so the framework question never has to be answered.
That is the distinction. Contract conditions are one thing. Compelled redesign of proprietary IP to enable capabilities Congress has never explicitly authorized — under a statute written for steel mills — is another.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
V. THE SUPPLY CHAIN RISK DESIGNATION: WEAPONIZING A FOREIGN ADVERSARY TOOL AGAINST AN AMERICAN COMPANY
The second major threat — designating Anthropic a "supply chain risk" — represents a different category of institutional abuse.
Supply chain risk designations exist to protect defense contractors and federal agencies from dependency on entities that pose genuine security threats, particularly foreign adversaries. The paradigmatic case is Huawei: a Chinese telecommunications company with documented ties to Chinese intelligence services, whose equipment could facilitate espionage against American military and civilian infrastructure.
The mechanism works as follows: once designated, every company with a federal defense contract must certify that the designated entity's products do not appear in their technology stack. Given that Anthropic has stated eight of the ten largest American companies use its technology, such a designation would be a cascading economic weapon — not a security measure.
Anthropic is a San Francisco-based American company. Its refusal to allow its technology to be used for domestic civilian surveillance and autonomous weapons without human oversight is not a security threat. It is, by any reasonable reading, the opposite. Applying the Huawei mechanism to an American company as punishment for maintaining ethical constraints against domestic surveillance is political retaliation being laundered through national security authority.
"Deploying this designation against a U.S. company just because its leaders have
some morals and some backbone is highly undemocratic — the sort of move one would
traditionally expect from the Chinese Communist Party, not a U.S. administration."
— J.D. Tuccille, Reason, February 25, 2026
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
VI. THE READINESS CLIFF THE PENTAGON BUILT FOR ITSELF
Lost in much of the coverage of this dispute is a structural fact that fundamentally undermines the Pentagon's coercive posture: the deadline does not threaten Anthropic's survival. It threatens the DoD's operational readiness.
Anthropic's $200 million Pentagon contract represents approximately 1.4% of the company's $14 billion in annual revenue. The contract provides institutional prestige and classified-systems access, but it is not existential for the company. The financial threat is a rounding error.
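The revenue-share figure above is a simple ratio, and readers can verify it directly. A minimal sketch, using only the two figures this paper reports (the $200 million contract value and $14 billion annual revenue, taken from the text rather than independently verified):

```python
# Back-of-envelope check of the revenue-share figure cited above.
contract_value = 200_000_000        # Pentagon contract value, USD (as reported)
annual_revenue = 14_000_000_000     # annual revenue, USD (as reported)

share = contract_value / annual_revenue
print(f"Contract as share of revenue: {share:.1%}")  # prints "Contract as share of revenue: 1.4%"
```

Even generous adjustments to either figure leave the contract a low-single-digit percentage of revenue, which is the structural point: the financial threat is marginal for the company.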
The Pentagon's situation is the reverse. Claude is currently the only frontier AI model cleared for use in classified defense networks, deployed across intelligence analysis, logistics, cybersecurity, and — critically — offensive cyber operations, where sources describe it as superior to every available alternative. If Hegseth carries out the supply chain designation and forces Claude off classified networks, the immediate consequence is not a vendor inconvenience. It is a self-inflicted intelligence vacuum across active defense workflows with no ready replacement.
xAI's Grok recently received classified access approval. But sources familiar with defense AI operations describe it as not yet capable of replacing Claude across the full range of current applications. The integration costs alone — retraining personnel, rebuilding workflows, and re-accrediting a replacement model for classified networks — constitute a readiness crisis Hegseth appears not to have accounted for when he set a 96-hour deadline.
"The only reason we're still talking to these people is we need them and we need them now."
— Senior Pentagon official, Axios, February 24, 2026
That is not a negotiating position. That is a confession that the deadline's pain falls on the party issuing it. The urgency belongs to the Pentagon. Anthropic's rational move is to let the clock run.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
VII. THE ELECTION TIMING PROBLEM
This white paper would be incomplete without addressing the context that gives the Hegseth ultimatum its most alarming dimension.
The two safeguards the Pentagon is demanding be removed are not random operational inconveniences. They are the specific technical constraints that prevent an AI system from being used to monitor the American public at scale and to deploy lethal force without human accountability. And the demand to remove them arrives as the midterm election cycle approaches.
Amodei himself identified the threat vector explicitly, writing last month: "A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow." He was not speculating about a foreign adversary's capabilities. He was describing what an American government with access to frontier AI and no ethical constraints could do to its own population.
Removal of these safeguards would materially expand the executive's operational latitude in two of the most constitutionally sensitive areas that exist: domestic surveillance and autonomous lethal force. The Pentagon's response — that it "has always followed the law" and that "legality is the Pentagon's responsibility as the end user" — does not address this concern. It evades it. "Trust us" is not a governance framework. It is the absence of one. Constitutional guardrails, judicial oversight, and private ethical constraints exist precisely because "trust us" has never been an adequate substitute for accountability.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
VIII. WHAT WOULD MAKE THIS LEGAL
This paper argues against the threatened invocation of the DPA, not against military use of AI. For the record: a lawful pathway exists. It simply requires the executive branch to do the work it is currently trying to circumvent.
• Explicit congressional authorization for AI model modification under the DPA, specifying that "services" encompasses proprietary software and establishing the conditions under which compelled modification is permissible, would address the statutory gap.
• A narrow emergency finding tied to declared hostilities, not a contractor dispute, would satisfy the Youngstown predicate.
• A compensation mechanism for compelled IP alteration would resolve the Fifth Amendment exposure.
• Congressional authorization for AI deployment in domestic surveillance and autonomous weapons, with corresponding judicial oversight and civil liberties safeguards, would answer the constitutional question the Pentagon is currently trying to skip.
The lawful pathway exists. The administration is choosing not to take it. That choice is what this paper documents.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
IX. RECOMMENDATIONS AND CONCLUSIONS
For Congress
The law is not keeping pace with AI deployment in defense applications. Congress should hold immediate hearings on the DPA's application to AI and software, clarify the statute's scope through legislation, and establish that executive authority to compel software modification does not exist under current law. The "legality is the Pentagon's responsibility" framing must be challenged by oversight mechanisms with actual teeth.
For Anthropic
Hold the line through the deadline. The legal case for challenging a DPA invocation in federal court is strong: Youngstown is directly on point, and the factual predicate here is weaker than the one on which Truman lost; the Major Questions doctrine applies; 50 U.S.C. § 4511 offers no textual hook for compelled software redesign; and no precedent exists for compelling the editorial revision of proprietary code in peacetime. The operational leverage is real and documented by the Pentagon's own officials. File immediately if the administration follows through. The courts are the right venue for this dispute, and Anthropic holds the better hand.
For the Public
Understand what is actually being demanded here. The Pentagon is not asking for better military AI. It is demanding the removal of the only remaining private-sector constraints preventing the government from deploying frontier AI against the American public without human accountability. The $200 million contract is the pretext. The surveillance architecture is the objective. And the mechanism being threatened to achieve it — a Korean War statute written for steel mills — has no legal basis for doing what the executive branch wants it to do.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
CODEX AMERICANA
© 2026 Red Anvil Publishing. All rights reserved.
