Objects That Struck the Data Center
Iran's nuclear program has been constrained for four decades. Its drones flew on American chips.
At 4:30 in the morning Pacific time on March 1, 2026, Amazon Web Services posted an incident report for its ME-CENTRAL-1 region in the United Arab Emirates. The language was remarkable — not for what it conveyed, but for what it refused to name. An availability zone had been “impacted by objects that struck the data center, creating sparks and fire.” Not missiles. Not drone fragments. Not the wreckage of an Iranian retaliatory strike that every news channel on earth was broadcasting in real time. Objects. The local fire department shut off power. Recovery, AWS noted, was “multiple hours away.” Thirteen hours later, the facility was still dark. The vocabulary had not improved. [1]
The objects were Iranian. Hours earlier, the Islamic Revolutionary Guard Corps had launched 165 ballistic missiles, two cruise missiles, and 541 drones at the UAE — retaliation for the joint US-Israeli assassination of Ayatollah Khamenei two days prior. UAE air defenses destroyed most of them. Thirty-five drones and five missiles did not. Three people died. Fifty-eight were injured, representing more than fifteen nationalities. And somewhere in the Emirates, a data center full of American servers caught fire from what AWS would only call “objects” — objects consistent with weapons built, in part, from American components. [2]
Iran has been the test case for technology nonproliferation for four decades. Its nuclear weapons program — the one the international community built an entire architecture to constrain — has not produced a deliverable weapon in all that time. The International Atomic Energy Agency conducts inspections. Sanctions target enrichment infrastructure. Military strikes destroyed key facilities in June 2025. The nonproliferation framework is imperfect: it did not prevent several countries from crossing the nuclear threshold. But even those failures required decades of effort from states with significant resources, and in Iran’s case, the combination of monitoring, sanctions, diplomacy, and ultimately military action has kept the program from reaching its goal. [3]
Iran’s drone program faced no comparable constraint. Ukrainian intelligence teardowns of Shahed-136 drones found that 77% of their components — 40 of 52 unique parts — came from thirteen American manufacturers, including Texas Instruments, Analog Devices, and Onsemi. [4] These are not controlled military goods. They are commercial semiconductors, GPS receivers, and microcontrollers, sold globally through civilian supply chains. No export license was required. No inspection regime flagged the purchases. No treaty governs their military end use.
The same country, the same week, two technologies: one that the world’s most sophisticated nonproliferation architecture has meaningfully constrained, and one that moved from American factories to Iranian weapons with nothing in between. The difference is not effort or intent. It is that nuclear material has the three properties governance requires — it is scarce, observable, and verifiable — and the technology in those drones lacks them.
This is the structural problem that every AI governance proposal must confront. The governance model we know how to build — inspection, restriction, physical verification — was designed for a world where dangerous technology requires observable infrastructure to produce. The technology that is actually proliferating requires a laptop and a download link.
What Moves
The Shahed components are the most visceral example, but they are not the most consequential. Four proliferation channels are now operating simultaneously, each beyond the reach of existing governance tools.
Open-source AI models. A Reuters investigation in October 2025 identified a dozen People’s Liberation Army tenders explicitly naming DeepSeek models, filed that year by multiple military entities. [5] The Jamestown Foundation documented applications spanning autonomous combat vehicles, drone swarm decision-making, battlefield scenario analysis, and satellite imagery processing. [6] DeepSeek is released under the MIT license — the most permissive open-source license available. No military exclusion. No geographic limitation. No field-of-use clause. As of March 2026, the base model has been downloaded 14.9 million times from Hugging Face alone. [7]
In June 2024, researchers at the PLA’s Academy of Military Science published a paper describing ChatBIT — a military chatbot fine-tuned from Meta’s Llama using approximately 100,000 military dialogue records. [8] Meta’s Llama license includes a prohibition on military use. The prohibition did not prevent the download. Could not prevent the fine-tuning. Cannot compel deletion.
The performance gap between open-weight and closed-weight models shrank from 8.04 points to 1.70 points between January 2024 and February 2025, measured on the LMSYS Chatbot Arena leaderboard. [9] Cost collapsed in parallel: achieving GPT-3.5-equivalent performance fell from $20 per million tokens in November 2022 to $0.07 by October 2024. [10] You cannot embargo a model that anyone can download, fine-tune on commodity hardware, and deploy without internet access. You cannot recall it after release. And every month, the model you cannot recall gets closer to the frontier you are trying to protect.
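The scale of that cost collapse is easy to read past. A back-of-envelope calculation makes it concrete — a sketch only, using the Stanford HAI figures cited in note [10]; the variable names are illustrative:

```python
# Back-of-envelope check of the iso-performance cost figures cited above.
# Both prices are from the Stanford HAI 2025 AI Index (see note [10]).
cost_nov_2022 = 20.00  # USD per million tokens, GPT-3.5-equivalent, Nov 2022
cost_oct_2024 = 0.07   # USD per million tokens, same quality tier, Oct 2024

ratio = cost_nov_2022 / cost_oct_2024
print(f"Cost per token fell roughly {int(ratio)}x in under two years")
# prints: Cost per token fell roughly 285x in under two years
```

The same arithmetic explains why any fixed dollar threshold for “frontier-scale” capability decays within months of being written down.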
Commodity components. The Shahed drones are not anomalous — they are representative. Ukrainian intelligence has catalogued foreign-made parts in 95% of Russian missiles and drones recovered since the 2022 invasion, with 72% of those parts sourced from US manufacturers. [11] The Royal United Services Institute teardown of Russia’s Kh-101 cruise missile — the weapon that struck the Okhmatdyt children’s hospital in Kyiv in July 2024, killing a doctor and a civilian — found 31 foreign components, including Intel processors and AMD-owned Xilinx chips. [12] The Iskander-M ballistic missile, one of Russia’s advanced weapons, relies on an SN-99 satellite guidance unit containing Texas Instruments digital signal processors, Spansion flash memory, and Linear Technology analog-to-digital converters — all American. [13] As of December 2025, Ukraine had identified more than 5,000 distinct foreign components across nearly 200 Russian weapon systems. [14]
These are not frontier AI chips. They are the ordinary building blocks of modern weapons: microcontrollers, signal processors, GPS modules, and voltage regulators. They are manufactured by the billions, sold through civilian distributors, and untraceable after the first transaction. The enforcement perimeter is not leaking. It was never built.
Advanced AI chips and the captive market problem. The October 2022 semiconductor export controls — the most aggressive technology denial effort since Cold War-era CoCom — cut Chinese AI labs off from Nvidia’s best GPUs and from the lithography equipment needed to manufacture equivalents. [15] Georgetown’s Center for Security and Emerging Technology concluded they “almost certainly slowed Huawei’s progress” on a per-chip basis. [16] A roughly 40% inference performance gap between Huawei’s Ascend 910C and Nvidia’s H100 — per vendor-affiliated benchmarks, not independent testing — is not trivial. But the controls also created a captive market for domestic alternatives. Before the controls, Chinese firms bought Nvidia because Nvidia was better. After, they bought Huawei because Nvidia was unavailable. Huawei’s original Ascend 910 had been fabricated by Taiwan’s TSMC — now inaccessible under US restrictions. Within three years, Huawei rebuilt on SMIC’s domestic process: the 910B began shipping in 2023, and the 910C followed in 2024. [17]
Even the controlled chips leak. The first conviction in an AI chip smuggling case came in December 2025: 3,872 H100 and 3,160 H200 GPUs, worth $160 million, relabeled under a fictitious brand and routed through Southeast Asia to China. [18] The conviction was a milestone. It was also an admission: the enforcement perimeter leaked for three years before producing a single criminal case. The pipeline from American chips to Chinese military systems has become explicit. In January 2026, the chairman of the House Select Committee on China disclosed that Nvidia had provided extensive technical support to DeepSeek — the same open-source model now integrated into PLA procurement systems — enabling it to extract frontier performance from export-controlled H800 chips. [19] Reuters identified 35 PLA patent applications referencing Nvidia A100 chips, including one filed as recently as June 2025 by the PLA Rocket Force for a remote-sensing target detection system. [20] The supply chain runs from Silicon Valley to Shenzhen to the People’s Liberation Army, and every link except the last is legal.
This is where the nuclear analogy breaks. When the nonproliferation regime restricted Iran’s access to enrichment centrifuges, Iran could not simply build equivalents — the physics of uranium enrichment and the precision manufacturing required create genuine bottlenecks that sanctions can exploit. When export controls restricted China’s access to Nvidia GPUs, China built alternatives within three years — because semiconductors, while difficult, are manufactured by processes that a sufficiently motivated state can replicate at a meaningful scale. The restriction generated the investment to overcome it.
Talent. When Hoover Institution researchers analyzed 211 DeepSeek researchers across five foundational papers, more than half were trained exclusively at Chinese institutions. Only 24% had any US institutional affiliation. [21] DeepSeek’s founder told 36Kr that the team behind the V2 model “doesn’t include anyone returning to China from overseas — they are all local.” [22] China’s domestic AI education pipeline has produced a workforce capable of frontier research without significant US training. The talent perimeter was not breached. It was bypassed.
Compare again to nuclear. Nuclear weapons expertise is rare, identifiable, and monitorable — the intelligence community can track centrifuge engineers and weapons physicists with meaningful confidence. AI expertise is broad, overlaps with civilian computer science, and is produced at scale by universities worldwide. You cannot build an export control regime for the knowledge that ten thousand graduates acquire every year.
Why Every Tool Fails the Same Way
Each proliferation channel is different — software, components, chips, talent — but each meets the same structural wall. The governance model that works for nuclear assumes three properties: the dangerous material is scarce, the production infrastructure is observable, and possession is verifiable. Nuclear material has all three. AI has none.
Open-source model weights are not scarce — they are infinitely copyable at zero marginal cost. Training runs are observable (cloud compute is trackable, in principle), but the resulting model can be copied, exfiltrated, and deployed on hardware that leaves no signature. Commercial semiconductor components are manufactured by the billions, distributed through global supply chains designed for efficiency rather than traceability, and their end use is unverifiable after the first sale.
The strongest candidate for AI governance is compute monitoring — the one point in the supply chain where the triad partially holds. Frontier training runs require massive GPU clusters that are scarce, observable through energy consumption and procurement records, and partially verifiable through hardware auditing. But the compute threshold keeps dropping — DeepSeek V3 trained for approximately $5.6 million in compute costs alone [23] — and once a model is trained, the resulting weights can be copied and deployed on commodity hardware beyond any monitoring perimeter. Compute governance might constrain the next training run. It cannot recall the last release.
The result is a pattern: every restriction that assumes scarcity creates the incentive structure to defeat it. Export controls on chips create captive markets. Military-use prohibitions in licenses merely reveal that the license is unenforceable. Component controls create third-country routing. And the restriction’s failure is not random — it is structural, because the restricted good can be replicated, rerouted, or replaced faster than the restriction can adapt.
Governance energy flows toward the targets it can see. Two days before the strike, Defense Secretary Pete Hegseth designated Anthropic — the AI company that most publicly maintains safety constraints — a “Supply-Chain Risk to National Security” for refusing the Pentagon’s demand that its models be available for “all lawful purposes.” [24] Within hours, OpenAI announced a classified Pentagon contract, accepting the framework Anthropic had refused. [25] The EU AI Act — the most comprehensive AI regulation on earth — classifies civilian chatbot risk with extraordinary precision while explicitly exempting military applications by constitutional design. [26] Helsing, a German defense AI company valued at €12 billion, is building autonomous weapons systems in partnership with France’s Mistral — entirely within the EU, entirely outside the AI Act’s scope. [27]
These are not failures of will. They are the predictable behavior of a governance architecture optimized for cooperative, visible actors. Anthropic showed up to negotiate. The PLA did not. The EU can regulate companies that sell into European markets. It cannot regulate a model downloaded under the MIT license and fine-tuned in a military research institute. The tools work where the targets cooperate. The proliferation runs where they don’t.
What Would Work and Why It Won’t Happen
The Convention on Certain Conventional Weapons is scheduled for its seventh review conference in November 2026. The Group of Governmental Experts on Lethal Autonomous Weapons Systems has met annually since 2014, but has not produced a binding instrument. [28] The CCW operates by consensus. Russia, the United States, and Israel have each blocked binding restrictions.
The Chemical Weapons Convention comes closest to a model that could apply: near-universal membership, mandatory declarations, on-site inspections, and no military exemption. It took over two decades to negotiate. Even so, the Organisation for the Prohibition of Chemical Weapons confirmed Syria’s repeated use of chemical weapons between 2013 and 2018 but could not compel compliance when Russia blocked Security Council enforcement. [29] And chemical weapons, like nuclear material, require physical infrastructure to produce.
A framework that actually constrained AI and component proliferation would need to solve the verification problem for goods that are copyable, invisible after distribution, and produced by civilian infrastructure that is indistinguishable from military infrastructure. No major power has proposed anything approaching this — and the absence of a proposal is not mere diplomatic neglect. The structural properties of the governed technology explain why no framework has been attempted: you cannot inspect what you cannot see, and you cannot verify what can be copied infinitely. The most promising partial approaches — compute monitoring for frontier training, component tracing for semiconductor supply chains — each address one channel without addressing the structural problem. The history of arms control suggests that partial frameworks constrain the actors who would have cooperated anyway. They do not constrain the actors that the framework was built to reach.
For any reader evaluating a future AI governance proposal — a new summit, a new framework, a new treaty — one question is sufficient:
Does the proposal assume the governed technology is scarce, observable, and verifiable? If so, it was designed for the nuclear case. It will not work for AI.
AWS will eventually restore power to mec1-az2. The incident report will close. The facility will be rebuilt or relocated. What will not change is the vocabulary. The next incident — wherever it happens, whatever strikes the infrastructure — will be reported in the same categories: objects, power, connectivity, recovery time. The template will not acquire a field for why.
That missing field is the structural problem, scaled from one company’s status page to the international governance architecture. For forty years, the institutions that govern dangerous technology have operated in categories inherited from nuclear: scarcity, observability, verifiability. Those categories constrained Iran’s nuclear program. They did not slow the drones by a single day. And every AI governance framework currently proposed is built on the same categories, applied to a technology that fits none of them. The incident report and the governance architecture share the same limitation. Both describe what they were built to see. Neither can name what is actually happening.
Notes
[1] AWS Health Dashboard incident report, ME-CENTRAL-1 region, March 1, 2026. Exact language per Reuters wire: “impacted by objects that struck the data center, creating sparks and fire.” Initial update at 4:51 AM PST; by 6:01 PM PST — more than thirteen hours later — AWS still had no estimated time for physical power restoration and was awaiting local authority clearance to restore power. AWS has not publicly attributed the incident to Iranian strikes; the temporal and geographic correlation with documented IRGC attacks on the UAE (February 28–March 1, 2026) is strong but circumstantial. AWS’s language choices throughout the incident — “objects,” “sparks and fire,” “localized power issue” — are consistent with corporate incident response protocols that avoid attributing causation, but the result is a description that strips military context from a military event.
[2] UAE Ministry of Defense figures, reported by The National (Abu Dhabi), March 1, 2026: 165 ballistic missiles, 2 cruise missiles, and 541 drones launched at the UAE across two waves (February 28 and March 1). 152 ballistic missiles and 506 drones destroyed by air defenses; 35 drones and 5 missiles penetrated. Three fatalities: one Pakistani, one Nepali, one Bangladeshi national. 58 injured across 15+ nationalities (the nationality count encompasses both fatalities and injuries). Cross-referenced with Al Jazeera, Gulf News, and Breaking Defense. The phrase “objects consistent with weapons” in the body text reflects the circumstantial nature of the attribution — AWS has not confirmed the cause, and the data center may have been struck by intercepted projectile debris rather than a direct hit.
[3] Iran’s nuclear program timeline: Iran ratified the NPT in 1970. The IAEA confirmed Iran’s covert weapons program (the AMAD Plan) ran from the late 1990s through 2003. Since the US withdrew from the JCPOA in 2018, Iran expanded enrichment to 60% U-235, accumulating enough material for multiple weapons if further enriched. By May 2025, the DIA estimated Iran needed “probably less than one week” to produce enough weapons-grade fissile material for a bomb; weaponization (building a deliverable warhead) would take additional months. In June 2025, Israeli and US military strikes severely damaged enrichment facilities at Natanz, Fordow, and Isfahan. The IAEA has described the setback as significant but has been unable to verify remaining capabilities since Iran suspended cooperation in July 2025. The core analytical point: over four decades, the combination of IAEA monitoring, sanctions, diplomatic frameworks, and ultimately military action has prevented Iran from fielding a nuclear weapon — a level of constraint that no comparable mechanism has imposed on Iran’s drone program, where American components moved freely from civilian supply chains to military platforms. The NPT framework did not prevent India (1974/1998), Pakistan (1998), Israel (undeclared, estimated 1960s-1970s), or North Korea (2006) from acquiring nuclear weapons — the regime constrained but did not universally block determined state actors, even for a technology with favorable governance properties. Sources: Congressional Research Service, “Iran’s Nuclear Program: Status,” updated 2025; IAEA Board of Governors reports; Arms Control Association, “Iran’s Nuclear Program After the Strikes,” July 2025.
[4] Ukrainian intelligence teardown of a Shahed-136 drone recovered in Ukraine, reported by CNN, January 4, 2023. Analysis identified 52 unique components, of which 40 (77%) were manufactured by 13 US companies, including Texas Instruments (nearly two dozen parts), Hemisphere GNSS, NXP USA, Analog Devices, and Onsemi. A separate Conflict Armament Research field dispatch (November 2022) examining multiple Iranian drone types found 82% of components from US-based firms across 70 manufacturers — the figures derive from different analyses and should not be conflated.
[5] Reuters investigation, approximately October 26–28, 2025: “Usage of DeepSeek models was indicated in a dozen tenders from PLA entities filed this year and seen by Reuters.” “This year” refers to 2025, the year of the Reuters investigation.
[6] Jamestown Foundation China Brief, Vol. 25, Issue 20, October 27, 2025, by Sunny Cheung and Kai-shing Lau: “DeepSeek Use in PRC Military and Public Security Systems.” Documents PLA procurement, including Norinco P60 autonomous combat vehicles, battlefield scenario analysis (10,000 scenarios in 48 seconds), satellite/drone imagery processing, and C4ISR integration.
[7] Hugging Face repository data for deepseek-ai/DeepSeek-R1 as of March 2026: 14.9 million downloads, MIT license. The MIT license permits use, copying, modification, merging, publishing, distribution, sublicensing, and sale with no field-of-use restrictions. The 14.9M figure covers only the base model; distilled variants have millions of additional downloads.
[8] PLA Academy of Military Science researchers, paper published June 2024, describing ChatBIT. Reuters (November 1, 2024) reported the model was fine-tuned using ~100,000 military dialogue records. The base model was most likely the original Llama 13B (released February 2023). Performance: the comparison model Vicuna-13B was rated at roughly 90% of GPT-4’s capability on independent benchmarks; Reuters reported ChatBIT outperformed models at that level. Reuters noted it “could not confirm ChatBIT’s capabilities.” Meta’s Llama Community License Agreement prohibits use “in connection with any weapons systems.” Enforcement mechanism: Meta can revoke the license but cannot technically prevent continued use of already-downloaded weights.
[9] Stanford HAI 2025 AI Index Report (8th edition, published April 7, 2025), Chapter 2: Technical Performance. Benchmark: LMSYS Chatbot Arena leaderboard (Elo-style ratings from human preference comparisons). The gap is measured as the normalized score difference between the leading closed-weight and the leading open-weight models. Data period: January 2024 to February 2025. The “points” are HAI’s normalized presentation of the Arena gap, not raw Elo ratings.
[10] Stanford HAI 2025 AI Index Report, Chapter 2. Iso-performance cost comparison: achieving GPT-3.5-equivalent quality cost $20/M tokens in November 2022 vs. $0.07/M tokens in October 2024. Precise ratio: approximately 285x.
[11] OCCRP/Novaya Gazeta Europe investigation, February 2025, cross-referenced with Ukrainian intelligence analysis. The 95% figure covers Russian missiles and drones recovered and analyzed by Ukrainian authorities; 72% of foreign-origin components traced to US manufacturers. At least 722 confirmed Ukrainian civilian casualties linked to Russian weapons containing Western parts. Corroborated by US Senate testimony (September 2024), where AMD, Analog Devices, Texas Instruments, and Intel executives acknowledged gaps in supply chain compliance. A Senate report concluded the companies had responded too slowly to prevent Russian military access.
[12] RUSI, “Silicon Lifeline: Western Electronics at the Heart of Russia’s War Machine,” August 2022, updated through subsequent field analyses. The Kh-101 teardown identified 31 foreign components, including Intel processors and AMD-owned Xilinx chips. The Kh-101 that struck Okhmatdyt children’s hospital on July 8, 2024, contained at least 16 US-made parts per Financial Times analysis (citing Office of the President of Ukraine). Production surged from 56 missiles in 2021 to 420 in 2023. NAKO/IPHR identified 24 components from Maxim Integrated and products from Cypress Semiconductor across multiple Kh-101 specimens.
[13] RUSI field analysis of a recovered Iskander-M (9K720) ballistic missile. The SN-99 satellite guidance unit — compatible with both GPS and GLONASS — was also found in the Iskander-K, Kh-101, 3M-14 Kalibr, 9M544 cluster munition, and Kh-59 Ovod missile. Western components included: Spansion 32-megabit flash memory chip (Sunnyvale, CA); Linear Technology 12-bit analog-to-digital converter (Milpitas, CA); Texas Instruments digital signal processors; along with microprocessors, FPGAs, SRAM chips, and crystal oscillators from US, Dutch, and German firms. The Baget-62-04 terminal guidance system, used for precision targeting, also contained Western-sourced FPGAs and memory chips. Confirmed by Conflict Armament Research (CAR) independent field analysis, September 2022.
[14] Ukrainian intelligence systematic cataloguing: 5,000+ distinct foreign components across nearly 200 Russian weapon systems as of December 2025 (Lviv Herald, citing Ukrainian intelligence service data). In one October 2025 wave of attacks (549 drones and missiles), Ukraine identified over 100,000 foreign-made components sourced from companies in the US, UK, Germany, Switzerland, Japan, South Korea, the Netherlands, and Taiwan (President Zelenskyy’s office, October 6, 2025; corroborated by Al Jazeera, Army Technology). The IPHR/NAKO/Hunterbrook July 2025 investigation confirmed Western microelectronics remain present in SU-34 and SU-35 fighter jets used in attacks on civilian infrastructure, analyzing 60 aerial attacks between May 2023 and May 2024.
[15] Note on the CoCom comparison: CoCom (Coordinating Committee for Multilateral Export Controls) was a multilateral Cold War-era regime dissolved in 1994. The October 2022 semiconductor controls were initially unilateral US action; the Netherlands and Japan subsequently imposed complementary restrictions on lithography equipment exports in 2023.
[16] CSET Georgetown, “Pushing the Limits: Huawei’s AI Chip Tests U.S. Export Controls,” June 2024.
[17] Huawei Ascend timeline: original 910 launched 2019 (TSMC 7nm). After US Entity List cut off TSMC access, Huawei shifted to SMIC’s 7nm-class process: 910B shipping 2023; 910C (two 910B chiplets) entered production 2024. Tom’s Hardware: “DeepSeek research suggests Huawei’s Ascend 910C delivers 60% of Nvidia H100 inference performance” — vendor-affiliated research, not independent testing. The 40% gap cited in the body text is inference-specific; training performance gaps may differ and have not been independently measured.
[18] US Department of Justice press release, December 8, 2025. Alan Hao Hsu, 43, of Missouri City, Texas, pleaded guilty to smuggling 3,872 H100 + 3,160 H200 GPUs worth $160.8M, relabeled under the fictitious “SANDKYAN” brand, routed through Singapore and Malaysia to China. Characterized as “the first-ever conviction in an artificial intelligence technology smuggling case.”
[19] House Select Committee on China, Chairman John Moolenaar, letter to Commerce Secretary Howard Lutnick, January 29, 2026. The letter details documents produced to the committee revealing Nvidia provided extensive technical support to DeepSeek, enabling it to extract frontier performance from export-controlled H800 chips by co-optimizing algorithms, software, and hardware. Moolenaar: “These findings demonstrate why rigorous enforcement of the Department’s H200 export rule, which requires certification that chips will not serve military purposes, is essential — even if such enforcement effectively prevents H200 exports to the PRC altogether.” Nvidia responded that China “has more than enough domestic chips for all of its military applications” and that “it makes no sense for the Chinese military to depend on American technology.” The committee’s April 2025 report separately documented DeepSeek routing American user data through infrastructure tied to a US-designated Chinese military company. Source: CSET Georgetown (Sam Bresnick and Cole McFaul), The Hill, December 3, 2025, provided additional expert analysis.
[20] Reuters investigation, October 27, 2025. Analysis of PLA patent filings identified 35 applications referencing Nvidia A100 chips filed by the National University of Defense Technology (NUDT) and the “Seven Sons” — a group of Chinese universities under US sanctions with a history of defense-related research. A PLA Rocket Force University of Engineering patent, filed as recently as June 2025, described a remote-sensing target-detection system trained on A100 chips. In the same period, 15 PLA patents cited Huawei Ascend chips, showing the transition to domestic hardware is underway but incomplete. Separate PLA procurement documents identified by CSET specified Nvidia H100 GPUs for AI algorithm calculations and clusters of Nvidia A800s for image-processing workstations — both export-controlled chips.
[21] Hoover Institution/Stanford HAI, “A Deep Peek into DeepSeek AI’s Talent and Implications for US Innovation,” April 21, 2025. Analyzed 211 of 223 researchers across five DeepSeek papers. 111/201 (55%) trained exclusively at Chinese institutions. 49 (24%) had any US affiliation.
[22] Liang Wenfeng interview, 36Kr (Chinese technology publication). Cited by the Hoover Institution and multiple English-language analyses. Independent academic analysis (Collegetowns Substack; Journal of International Students, OJED, 2025) confirmed that traceable core contributors were educated entirely within China’s domestic system.
[23] DeepSeek reported the compute cost for training DeepSeek-V3 at approximately $5.576 million (2.788 million H800 GPU hours), with an implied rate of ~$2/hour that is consistent with market estimates for H800 cloud pricing. This figure covers GPU compute costs only — it excludes researcher salaries, R&D costs, failed experiments, infrastructure overhead, and the accumulated knowledge from prior model generations (V1, V2, R1) that informed V3’s architecture. Total investment in DeepSeek’s AI research program is substantially higher. The figure is analytically relevant because it demonstrates compute cost deflation for frontier-class models, not because it represents the true cost of frontier AI development.
[24] Defense Secretary Pete Hegseth, post on X, February 27, 2026. Approximately 1 hour earlier, President Trump posted on Truth Social, directing federal agencies to cease using Anthropic. First application of this designation to an American company. Anthropic called the designation “legally unsound” and stated it would challenge in court. Anthropic’s two red lines: no fully autonomous weapons, no mass domestic surveillance.
[25] OpenAI CEO Sam Altman announced a classified Pentagon contract on February 27. OpenAI accepted the “all lawful purposes” framework that Anthropic had refused. Altman acknowledged the deal was “definitely rushed” and that “the optics don’t look good.” The pattern extended further: xAI had signed a classified deployment agreement on February 23, also accepting the “all lawful purposes” standard. Three companies, three outcomes: Anthropic refused and was designated; xAI and OpenAI accepted and were rewarded.
[26] EU AI Act, Article 2(3): “This Regulation does not apply to AI systems where and in so far they are placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes.” Legal basis: Article 4(2) TEU.
[27] Helsing raised €600M at a valuation of €12B (June 2025). Partnership with Mistral to develop vision-language-action models for autonomous weapons systems (announced February 2025 at Paris AI Action Summit). Both companies operate within the EU; the AI Act’s risk classifications do not apply to their defense products. Meanwhile, €800B in new European defense spending under ReArm Europe explicitly includes autonomous systems and military AI (European Commission, March 2025).
[28] The CCW Group of Governmental Experts on LAWS has met annually since 2014. The CCW operates by consensus. Russia has opposed binding restrictions. The US has opposed new treaty obligations. Israel has resisted constraints on autonomous weapons. No binding instrument produced.
[29] The Chemical Weapons Convention: near-universal membership, mandatory declarations, on-site inspections, no military exemption. Took over two decades to negotiate (serious discussions began in 1968; the treaty opened for signature in 1993). Even with approximately 200 inspectors and an €80M annual budget, the OPCW confirmed Syria’s repeated use of chemical weapons (2013–2018) but could not compel compliance when Russia blocked Security Council enforcement.

