The United States finds itself at a defining moment in the interplay between cutting-edge artificial intelligence and the law of export controls. A regulatory framework historically built around hardware and semiconductors has, for the first time, been recalibrated to cover software artefacts and AI model internals themselves. This shift is neither superficial nor merely technical; it represents a profound evolution in export control law, one that places the intellectual products of AI and the computational infrastructure that produces them at the centre of national security and geoeconomic strategy. The legal logic underpinning this regime combines a sophisticated application of export control statutes with a nuanced understanding of how frontier technologies propagate across borders.
Legal Foundations for AI Controls under US Export Law
Export controls in the United States trace their modern legal authority to the Export Control Reform Act of 2018, implemented through the Export Administration Regulations (EAR), codified at 15 C.F.R. parts 730-774. The EAR empowers the Department of Commerce’s Bureau of Industry and Security (BIS) to regulate the export, re-export and in-country transfer of dual-use goods, software and technology deemed critical to national security or foreign policy interests. These authorities have historically been applied to tangible goods such as integrated circuits, advanced processors and specialised equipment crucial for high-performance computing. In 2025, BIS expanded this framework in a manner unprecedented in both doctrine and breadth by including advanced AI model weights and software artefacts as items subject to export controls.
The statutory logic is rooted in the dual-use nature of AI technologies. Advanced AI models, the closed-source, high-performance neural networks that drive state-of-the-art capabilities, can serve commercial and benign purposes, but they also present potential risks if accessed by actors hostile to United States interests. These risks include accelerated development of autonomous weapons systems, enhanced cyber operations and intelligence exploitation. Export controls, traditionally designed to restrict physical hardware, have thus been adapted to mirror how contemporary technological value is embedded in algorithms, model parameters and data flows, not merely in chips or circuit boards.
Interim Final Rule and Its Legal Architecture
On 13 January 2025, BIS published an Interim Final Rule (IFR) amending the EAR’s scope to include both certain advanced integrated circuits and the “model weights” of AI systems. This rule represents a deliberate legal decision to treat controlled AI model weights as items whose cross-border transfer demands scrutiny akin to that of high-end hardware. The rule introduces Export Control Classification Number (ECCN) 4E091, which captures “closed-weight” AI models trained with computational workloads above a defined threshold of 10^26 operations. Relevant AI model parameters are thereby enveloped under the EAR and are subject to a global licensing requirement unless they fall within narrowly defined exceptions.
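To make the threshold mechanics concrete, the sketch below illustrates how a compliance team might pre-screen a model against the 10^26-operation criterion described above, together with the open-weight carve-out discussed later in this piece. It is a minimal illustration under simplified assumptions; the function and variable names are hypothetical and do not correspond to terms used in the rule itself.

```python
# Hypothetical pre-screen for ECCN 4E091 exposure; names and logic are
# simplified assumptions for illustration, not the rule's actual criteria.
ECCN_4E091_THRESHOLD_OPS = 1e26  # training-compute threshold (operations) cited above

def may_be_controlled(training_ops: float, open_weights: bool) -> bool:
    """Flag closed-weight models trained above the threshold for legal review."""
    if open_weights:
        # Publicly released ("open-weight") models fall outside the control.
        return False
    return training_ops > ECCN_4E091_THRESHOLD_OPS

print(may_be_controlled(3e26, open_weights=False))  # True: review licensing posture
print(may_be_controlled(3e26, open_weights=True))   # False: open-weight carve-out
```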
This structural choice reflects a legal rationale that the value and risk associated with advanced AI do not reside in physical form alone. In practical terms, “model weights” represent the distilled intellectual content of an AI model, the mathematical parameters learned during training that determine behaviour and output. BIS’s regulatory approach acknowledges that controlling the flow of this intellectual content is now as important as controlling the physical semiconductors that enable its creation.
The IFR’s licensing regime is deliberately exclusionary on its face: for AI model weights falling under ECCN 4E091, exports and transfers to destinations other than designated allied or partner countries require a licence. Moreover, BIS commits to reviewing such licence applications with a presumption of denial unless compelling mitigation of national security risk is demonstrated. This presumption reflects an explicit legal prioritisation of security over commercial proliferation, a point that will inevitably draw rigorous judicial and policy debate.
ECCNs, Foreign Direct Product Rule and Red Flags
The interim rule did more than define new categories. It embedded nuanced legal mechanisms designed to expand and assert jurisdiction over foreign-developed AI artefacts that are functionally derivative of US-controlled technologies. Under a revised Foreign Direct Product Rule (FDPR), model weights generated abroad using equipment or software subject to the EAR can themselves be treated as though they are subject to the EAR when they satisfy the ECCN criteria. In essence, BIS has signalled that jurisdiction over intangible technological artefacts can follow the trail of computational lineage rather than the trail of physical geography.
This expansion into the intangible domain is not without legal precedent: the FDPR has historically been used to capture foreign-produced items that are direct products of US origin technology or production equipment. Its application to AI model weights brings export control law into conceptual territory traditionally occupied by intellectual property and software regulation. The legal consequence is significant: companies must analyse not only where a model was trained but also the provenance of the computational tools employed, inviting complex compliance assessments that traverse corporate boundaries and technological architectures.
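A minimal sketch of that provenance analysis appears below, under the simplifying assumption that the FDPR question reduces to three facts: whether the weights were produced abroad, whether EAR-subject tooling or equipment was used, and whether the resulting model meets the ECCN criteria. The data structure and field names are illustrative, not regulatory definitions.

```python
# Simplified FDPR screening sketch: flags foreign-produced weights whose
# computational lineage runs through EAR-subject tools or equipment.
from dataclasses import dataclass

@dataclass
class TrainingRun:
    produced_abroad: bool              # weights generated outside the United States
    used_ear_subject_tooling: bool     # e.g. controlled chips or EAR-subject software
    meets_eccn_criteria: bool          # satisfies the ECCN 4E091 parameters

def fdpr_may_capture(run: TrainingRun) -> bool:
    """Return True if the weights may be treated as subject to the EAR via the FDPR."""
    return run.produced_abroad and run.used_ear_subject_tooling and run.meets_eccn_criteria

print(fdpr_may_capture(TrainingRun(True, True, True)))   # True: provenance review needed
print(fdpr_may_capture(TrainingRun(True, False, True)))  # False under this simplified test
```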
In parallel, BIS guidance documents have underscored that seemingly routine cloud-based training operations can trigger export control liability. For example, if a US cloud provider’s infrastructure supports the training of a controlled AI model for a foreign entity, even a domestic transfer of those model weights to the foreign entity’s subsidiary or affiliate may, in the regulator’s view, implicate the EAR absent explicit licensing or a clear exemption. This interpretation signals a deliberate emphasis on enforcement reach that extends into domestic operational ecosystems tied to global computing infrastructures.
Licence Exceptions and Strategic Carve-Outs
Recognising both domestic and allied reliance on AI development, the interim rule carved out License Exception “Artificial Intelligence Authorization” (AIA). AIA permits controlled exports to entities headquartered in designated allied countries, provided that extensive certification and compliance criteria are satisfied. Exempting allied transfers while maintaining stringent controls elsewhere aligns with longstanding export control practice, which historically differentiates between national security partners and other jurisdictions. Nonetheless, the legal scaffolding for such preference raises questions about sovereign equality under international trade norms, especially where licensing exceptions reflect geopolitical alliances rather than strictly technical risk assessments.
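In simplified form, the exception narrows the default licensing requirement roughly as sketched below: a transfer to an entity headquartered in a designated allied country that satisfies the certification and compliance criteria may proceed under AIA, while anything else falls back to a licence requirement reviewed with a presumption of denial. The country codes here are placeholders and the actual conditions are considerably more detailed.

```python
# Rough screen for availability of License Exception AIA; the allied-country
# set and certification flag are placeholders, not the rule's actual lists.
ALLIED_DESTINATIONS = {"GB", "JP", "AU"}  # hypothetical subset for illustration

def aia_exception_may_apply(destination_hq_country: str, certifications_met: bool) -> bool:
    """Return True if the AIA exception is plausibly available for the transfer."""
    return destination_hq_country in ALLIED_DESTINATIONS and certifications_met

print(aia_exception_may_apply("JP", certifications_met=True))   # True: AIA may apply
print(aia_exception_may_apply("BR", certifications_met=True))   # False: licence required
```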
Additionally, the rule excludes open-weight models, those publicly disclosed and available, from control. This is a calculated legal acknowledgement that open science and publicly disseminated technology do not, by definition, create the same risk profile as controlled intellectual property. However, the exemption also incentivises broader open-source practices, blurring the traditional legal distinction between proprietary and public goods within export control regimes.
The Rescission and Its Legal Implications
Despite its ambitious design, the interim rule’s comprehensive architecture faced substantial backlash from industry stakeholders and allied governments. In May 2025, BIS formally rescinded the “Framework for Artificial Intelligence Diffusion” rule, citing concerns that its compliance burdens could hamper US innovation and strain international cooperation. The rescission underscores a fundamental tension in export control law between strategic protection and the legal endorsement of open markets and allied trust.
However, the doctrine underpinning the interim rule has not dissipated. BIS concurrently issued guidance that recalibrates enforcement focus and reasserts the legal significance of controlling advanced computing and AI-related exports. The guidance emphasises due diligence obligations, red flag indicators of unlicensed transfers, and an elevated enforcement posture that integrates licensing scrutiny into a broader compliance ecosystem.
Legal Strategy and Compliance in a Shifting Export Landscape
For legal practitioners advising multinational technology firms, the 2025 export regime presents a complex compliance frontier. Lawyers must navigate a matrix of regulatory categories, interpretive guidance, and shifting policy signals. Key questions include whether a foreign-trained AI model qualifies as an exportable item under the EAR, how the FDPR applies in cross-border scenarios, and the conditions under which licence exceptions may be available.
In addition, there is a growing need to harmonise internal corporate governance structures with external legal obligations. Compliance programmes must now account for in-house training, third-party cloud providers, international research collaborations and data residency practices, each of which can implicate export control risk. The legal profession’s role is thus not confined to transactional licensing but extends into proactive governance design, risk forecasting, and strategic cross-border operational planning.
Export Control as Legal Strategy
The evolution of the US AI export control regime in 2025 reflects a fundamental legal conclusion: advanced artificial intelligence, as a body of intellectual activity, merits treatment akin to physical dual-use goods when its diffusion intersects with national security interests. By extending export control law into the realms of software, computational artefacts, and distributed data flows, the US government has asserted a form of legal sovereignty over the global lifecycle of AI innovation.
Yet this approach raises enduring questions about the balance between innovation facilitation and strategic restriction, international trade obligations and unilateral regulatory assertions, and the role of law in shaping technological hegemonies. As the legal edifice around AI export controls continues to develop, including potential replacement rules and further refinement of licensing doctrines, lawyers must engage with export control not as a compliance checkbox but as a dynamic instrument of geopolitical legal strategy.