Report
Artificial Intelligence Impacts on Privacy Law
Aug 8, 2024
Classification and Requirements for Providers
The European Union (EU)'s Artificial Intelligence (AI) Act regulations will apply to U.S. companies seeking to operate in the EU market.[a] These regulations include several obligations specific to the most-advanced and most-powerful AI models, which the EU AI Act categorizes as general-purpose AI (GPAI) models. Within this group of models is an even more targeted category of high-impact-capability GPAI models, classified as GPAI models with potential systemic risk. The European Union's treatment of such high-impact-capability GPAI models overlaps with reporting requirements that the United States created in 2023 for "dual-use foundation models";[b] however, the initial thresholds for models covered differ between the United States and the European Union, and the exact standards for identifying risk are inchoate for both.
The U.S. government has several options regarding GPAI models to consider in response to the EU AI Act, falling into three broad categories: (1) cooperating on standards for evaluating AI, (2) harmonizing reporting requirements between the United States and the European Union, and (3) tracking companies' risk-management and incident-mitigation efforts.
[a] European Union, "Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance)," June 13, 2024.
[b] Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," Executive Office of the President, October 30, 2023.
The European Union (EU)'s Artificial Intelligence (AI) Act is a landmark piece of legislation that lays out a detailed, wide-ranging framework for regulating the development, testing, deployment, and use of AI in the European Union. As part of this framework, European policymakers have included rules specific to the most-advanced and most-powerful AI systems, often referred to as foundation models, which have driven much of the recent improvement in AI capabilities.[1] This report provides a snapshot of the European Union's regulation of such models, which are often central to AI governance discussions in the United States, the European Union, and globally. The report does not aim to provide a comprehensive analysis but rather is intended to spark dialogue among stakeholders on specific facets of AI governance, especially as AI applications proliferate worldwide and complex governance debates persist. Although we refrain from offering definitive recommendations in this report, we explore a set of priority options that the United States could consider in relation to different aspects of AI governance in light of the EU AI Act. Our aim is to inform U.S. policymakers about regulatory decisions being made internationally so that they can decide whether to incorporate elements of the EU approach into their own proposals or to pursue a different, more hands-off approach to AI regulation.
Under the EU AI Act, most foundation models are categorized as general-purpose AI (GPAI) and have special requirements imposed on them in addition to any rules that might attach under the EU AI Act's other regulatory provisions. Some of the requirements the European Union has put forward (such as the testing requirements for models posing a systemic risk) dovetail with existing U.S. actions regarding AI (such as the Defense Production Act requirement that developers of certain classes of AI report on the tests they have performed).[2] However, the EU AI Act's requirements do mean that U.S. companies will now be subject to reporting and testing requirements for their AI models if they wish to sell their products in the European Union, regardless of what actions or policies the United States pursues regarding AI regulation. These rules are therefore significant to AI regulation globally, both as a potential model for future AI regulation and because of the requirements they will place on U.S. companies.
The EU AI Act breaks GPAI models into two categories. The first category is simply those models that count as GPAI; the second category applies to a subset of these models that are identified as posing a systemic risk.[3] This section will first summarize the requirements for all GPAI models and then detail the specific additional requirements imposed on GPAI models that pose a systemic risk under the EU AI Act. It should be noted that the GPAI regulations do not preclude requirements being imposed under other sections of the EU AI Act: For example, a GPAI model can be integrated into another AI system and be subject to additional regulations as a high-risk system.[4]
AI models are categorized as GPAI models when they display the capability to competently perform a wide variety of distinct tasks.[5] The EU AI Act's text clarifies that "models with at least a billion of [sic] parameters and trained with a large amount of data using self-supervision at scale should be considered to display significant generality."[6] The EU AI Act then applies requirements to the GPAI model's provider, which the EU AI Act defines as the party that develops the AI system and makes it available in the EU market or supplies it to a third-party deployer of the model in the European Union.[7] This means that the party required to comply with these regulations is generally going to be the developer of that AI model—even if that company is not based in the European Union—so long as the company is making its model available within the European Union directly or to third parties who then deploy it.
There are four broad requirements that the EU AI Act imposes on a GPAI model's provider:

- drawing up and keeping up to date technical documentation of the model, including its training and testing process and the results of its evaluation, for provision to the EU AI Office and national competent authorities upon request
- drawing up and making available information and documentation to downstream providers who intend to integrate the GPAI model into their own AI systems
- putting in place a policy to comply with EU copyright law
- drawing up and making publicly available a sufficiently detailed summary of the content used to train the model
Under certain conditions, the first two requirements do not apply to those providers who make their models accessible to the public "under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available."[10]
GPAI model providers are required to "draw up and keep up-to-date" documentation containing a general description of the model's intended tasks, its architecture, the number of parameters, the modality and format of inputs and outputs, the types and nature of AI systems in which the GPAI model can be integrated, the date and method of release and distribution, the acceptable use policies the GPAI model developer maintains, and the license associated with the model.[11] Providers must make some of this information available to those "who intend to integrate the general-purpose AI in their AI system."[12] They must also make available relevant information regarding the technical means required to integrate the GPAI model into an AI system, as well as some information about the datasets used for training, testing, and validation.[13] However, the information that must be provided to the EU AI Office is more detailed than the information that must be provided to downstream providers. For example, the AI Office established by the EU AI Act and the European Commission have the right to request information about model design, training, and estimated energy consumption that a downstream provider does not.[14] In specific cases, national competent authorities also have this right.[15] The European Commission has also been empowered to further detail these requirements in delegated acts.[16]
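The documentation items listed above can be easier to see at a glance as a structured record. The sketch below is purely illustrative: the field names paraphrase the items named in the paragraph above and do not correspond to any official schema, template, or API in the EU AI Act.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GPAIModelDocumentation:
    """Illustrative paraphrase of the documentation a GPAI provider must keep
    up to date under the EU AI Act; field names are assumptions, not an
    official schema."""
    intended_tasks: str
    architecture: str
    parameter_count: int
    input_output_modalities: str
    integratable_system_types: List[str] = field(default_factory=list)
    release_date: str = ""
    distribution_method: str = ""
    acceptable_use_policy: str = ""
    license: str = ""
```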
Certain GPAI models can also be identified as having high-impact capabilities and, therefore, be classified as having systemic risk and be subject to additional regulatory requirements.[17] This can occur under two circumstances:

- The model has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; a model is presumed to have high-impact capabilities when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations (FLOP).
- The European Commission decides, either on its own initiative or following a qualified alert from the Act's scientific panel, that a model has capabilities or an impact equivalent to those described above.
Several additional obligations are imposed on the providers of GPAI models with systemic risks. First, they are required to "perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks."[23] Though the EU AI Act itself does not define adversarial testing, it is possible that red-teaming—structured testing to identify a model's flaws, vulnerabilities, and unforeseen behaviors—will now be required for such models.[24]
Second, the model provider must "assess and mitigate" possible systemic risks associated with the model.[25] What these risks might be and how they should be managed will be explained in the codes of practice, which will be developed by the EU AI Office with input from providers by May 2, 2025, at the latest.[26]
Third, providers must "keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them."[27]
Fourth, providers must "ensure an adequate level of cybersecurity" for the model and its physical infrastructure.[28] The EU AI Act also empowers the European Commission to maintain and publish a list of all GPAI models with systemic risk.[29]
In addition, unlike the broader rules for all GPAI models, there is no open-source exemption from the regulations for GPAI models that pose systemic risk. Open-source developers will therefore have to comply with all the regulatory requirements outlined above if they wish to provide a GPAI model with systemic risk to the EU market.
Many of the specific requirements, such as what tests might be required for a GPAI model with systemic risks, are not fully defined in the EU AI Act itself. As part of creating such rules, the EU AI Office established by the EU AI Act has invited GPAI providers, along with regulators and civil society bodies, to participate in drawing up codes of practice, which would clarify how GPAI providers could demonstrate compliance as well as how developers would be required to manage the systemic risks of their models.[30]
The EU AI Act also provides for penalties should a party not comply with these GPAI-specific regulations. If a provider of a GPAI model violates these provisions, it may be subject to a fine of "3% of [its] annual total worldwide turnover in the preceding financial year or [15 million euros], whichever is higher."[31]
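To illustrate how this penalty scales with company size, the following minimal sketch applies the "3 percent of worldwide turnover or 15 million euros, whichever is higher" rule to a hypothetical provider; the turnover figure is invented for illustration, and the Act's detailed calculation rules are not reproduced here.

```python
def max_gpai_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Illustrative upper bound on the GPAI fine under the EU AI Act:
    3% of annual total worldwide turnover or EUR 15 million, whichever is higher."""
    return max(0.03 * annual_worldwide_turnover_eur, 15_000_000)

# Hypothetical provider with EUR 2 billion in annual worldwide turnover:
# the 3% prong (EUR 60 million) exceeds the EUR 15 million floor.
print(max_gpai_fine_eur(2_000_000_000))  # 60000000.0
```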
A key implication of the GPAI systemic risk rules outlined above is that the most-advanced foundation models currently available to the public are already subject to the GPAI systemic risk provisions of the EU AI Act. For example, according to the European Commission, the 10^25 FLOP threshold captures OpenAI's GPT-4 and possibly Google's Gemini.[32] Furthermore, the regulation posits that systemic risk increases with greater model capability; the amount of computing power used to train the model is only one possible approximation of that risk.[33] Consequently, it is possible that all future, more-capable foundation models developed in the United States whose developers wish to provide these models in the European Union will also be subject to the EU AI Act’s systemic risk regulations, regardless of the exact number of FLOP used for training.[34]
While U.S. companies could avoid the regulatory requirements of the EU AI Act by never deploying their models in the European Union, it is unlikely that the large multinational companies deploying AI models will permanently forgo the EU market simply to escape the regulations on GPAI models. It is possible that the majority of commercial models developed in the United States will comply with EU regulations on GPAI with systemic risk regardless of what legislation or regulation exists in the United States. Therefore, it is also possible that—regardless of the approach the United States takes to AI regulation—there will be some sort of testing regime with which major U.S. AI labs are likely to comply, and the results of those tests will be reported to EU bodies.
Furthermore, this presents an opportunity for U.S. companies to engage with the development of the EU AI codes of practice—a set of documents that the EU AI Act has instructed the EU AI Office to create that will "contribute to the proper application" of the EU AI Act.[35] The EU AI Act empowers the EU AI Office to invite "providers of general-purpose AI models . . . to participate in the drawing-up of codes of practice."[36] Only a limited number of companies develop models that meet the threshold of high-impact capability, and many of them are based in the United States; therefore, U.S. companies will have the opportunity to contribute to these standards if they provide input to the European Commission and other EU offices as the codes of practice are being formulated in the coming year.
Although the EU AI Act does impose requirements on leading U.S. AI companies seeking to enter the European market, these requirements should not be considered entirely novel. In certain key respects, they converge with existing AI policy in the United States.
Most notably, the definition and reporting requirements imposed by the EU AI Act on GPAI models posing a systemic risk dovetail with those issued for "dual-use foundation models" in the October 30, 2023, Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110).[37] The EO requires that developers of "dual-use foundation models"—meaning any model that "exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters"—report their development plans, the cybersecurity protections of those models, and the results of any red-teaming carried out on those models to the Bureau of Industry and Security within the Department of Commerce under the Defense Production Act.[38] This requirement is similar to the EU AI Act's testing and reporting requirements for GPAI models with systemic risk, though the EU AI Act mandates that certain tests be performed and imposes financial penalties for noncompliance, whereas EO 14110 requires only the reporting of information on such tests if they have actually been conducted.
Both regulations define the models subject to heightened reporting requirements in terms of model capabilities and performance. However, the two currently use different FLOP thresholds: The EU AI Act sets its threshold at 10^25 FLOP, while the EO uses a 10^26 FLOP threshold.[39] Either threshold can be updated if judged necessary by the European Union or the United States, respectively. For now, however, this discrepancy means that the European Union will require testing for a broader set of models than is called for in the United States.
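To make the practical difference between the two thresholds concrete, the sketch below compares a hypothetical model's estimated training compute against both figures. The "roughly 6 FLOP per parameter per training token" rule of thumb and the model size used here are illustrative assumptions, not estimates drawn from either regulation or from any particular developer.

```python
# Minimal sketch (not from the EU AI Act or EO 14110): compares an estimated
# training-compute figure against the two regulatory thresholds.

EU_AI_ACT_THRESHOLD_FLOP = 1e25  # EU AI Act presumption of systemic risk
EO_14110_THRESHOLD_FLOP = 1e26   # EO 14110 dual-use foundation model reporting threshold

def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Common rule of thumb: ~6 FLOP per parameter per training token (assumption)."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 500 billion parameters trained on 10 trillion tokens
flop = estimated_training_flop(5e11, 1e13)  # = 3e25
print(f"Estimated training compute: {flop:.1e} FLOP")
print("Above EU AI Act systemic-risk presumption:", flop > EU_AI_ACT_THRESHOLD_FLOP)  # True
print("Above EO 14110 reporting threshold:", flop > EO_14110_THRESHOLD_FLOP)          # False
# A model of this (hypothetical) size would be presumed to pose systemic risk in the
# European Union but would fall below the EO's reporting threshold, illustrating why
# the EU rule captures a broader set of models.
```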
The EU AI Act and EO 14110 are also complementary in emphasizing testing and red-teaming of AI foundation models. EO 14110 directs the National Institute of Standards and Technology (NIST) to identify and establish guidelines and standards for testing AI for dangerous capabilities, as well as broader standards for assessing an AI's "safety, security, and trustworthiness."[40] The EU AI Act requires that testing be conducted using "state of the art" benchmarks but does not specify which benchmarks should be used. These similar approaches create an opportunity for synergy and collaboration between NIST and the EU AI Office in creating guidelines and cutting-edge benchmarks for model testing.
The EU AI Act provides the United States with an opportunity to respond to and potentially collaborate with the European Union as the latter implements the EU AI Act because the two share a goal of making AI models safer and more trustworthy. The EU AI Act's regulation of GPAI and potential systemic risk from the most-powerful systems provides a potential template for more-comprehensive regulation of AI in the United States. One option would be for the United States to adopt some or all of the provisions of the EU AI Act on GPAI models, such as the reporting requirements for AI models identified as posing a systemic risk. This could harmonize aspects of the regulatory environment in both the United States and Europe and ensure that U.S. consumers would have the same level of protection as that extended to European consumers.
However, it may not be desirable to implement EU regulation of GPAI wholesale; many specifics of the European Union's implementation of its GPAI provisions are not yet known. Instead, the United States could focus its efforts on only those models that the European Union would classify as being a systemic risk.
There are three specific areas in which the United States could implement policy alongside the EU AI Act now without having to adopt the entire EU model: (1) cooperating on standards for evaluating AI, (2) harmonizing reporting requirements between the United States and the European Union, and (3) tracking companies' risk-management and incident-mitigation efforts.
The United States could play a more active role in developing the standards by which AI models are evaluated under the EU AI Act and other rules. A key mechanism for such collaboration is the EU AI Office, which was established by the EU AI Act and empowered to "contribut[e] to international cooperation related to AI regulation and governance."[41] NIST and other U.S. bodies with an interest in AI policy from both the government and private sector could be encouraged to collaborate with the EU AI Office to establish standards for GPAI evaluation. Specifically, NIST's work under EO 14110 might help develop benchmarks and identify standards for AI evaluation. This would provide the United States with an opportunity to lead on developing the new science of evaluations alongside its strategic partners in Europe and promote a harmonized set of AI testing results in both jurisdictions. Initial steps in this direction have already been taken: The EU AI Office and the U.S. AI Safety Institute have "committed to establishing a Dialogue to deepen their collaboration."[42]
The AI Safety Institute, which the United Kingdom recently established to help research and promote safe AI development practices,[43] might be another natural partner in developing the appropriate testing practices and benchmarks that will be necessary for implementing the EU AI Act's testing requirements. Initial actions fostering such cooperative endeavors are already being undertaken between the United States and the United Kingdom.[44] Harmonizing these testing practices across the United States, the United Kingdom, the European Union, and such countries as Singapore and Japan (which have both taken steps on AI testing)[45] could help support a shared AI ecosystem in which companies can easily operate across multiple markets.
The EU AI Act places multiple reporting and information-sharing requirements on GPAI providers operating in the European Union. These requirements will exist regardless of U.S. policymaking decisions; therefore, the United States could ensure that it has access to the same information as EU decisionmakers. For example, the United States could pass a requirement that any information reported to the European Union under the EU AI Act's regulation of GPAI also be reported to U.S. regulators and policymakers. This would potentially expand the amount of information reported to the U.S. government beyond what EO 14110 currently requires. (The EO currently requires only the reporting of tests that an AI model developer undertakes; it does not mandate any tests, whereas the EU AI Act does mandate certain evaluations.) However, this policy could ensure that the United States has the same level of visibility into its companies' products as EU policymakers will have into theirs regarding such issues as systemic risks and any serious incidents that occur in such models.
The United States could also maintain EO 14110 reporting requirements for AI models above the EO’s 10^26 FLOP threshold. This would create a separate reporting requirement from the European Union's, but it would ensure that the United States could continue to gather some information regarding the national security–relevant capabilities of such models.
U.S. policymakers could also mirror the EU AI Act's requirement that GPAI providers supply technical documentation to the third-party partners and downstream deployers that use their models. Implementing a similar requirement in the U.S. market could ensure that U.S. companies using AI models provided by other companies would have the same access to information about those models as European companies deploying the same AI models.
Third, the EU AI Act requires that providers of GPAI models with systemic risk "assess and mitigate" such risks as well as report "serious incidents" and "possible corrective measures" to address such incidents.[46] The United States might consider similarly requiring that major risks identified in models be reported to the government, perhaps for a set of particularly significant risks, such as bioweapon or cyberweapon production. Such a requirement could also include formalized incident-reporting protocols for cases in which a company's AI is used to attempt to produce such threats. These requirements could be broadened further to have companies report the risk-management practices for their AI models so that the U.S. government could understand how companies are mitigating the potential threats their models might pose.
The EU AI Act also imposes cybersecurity requirements on GPAI models with systemic risk.[47] Although these requirements are not yet defined, the United States might want to consider similar requirements for the developers of the most-powerful models to ensure that they are not stolen by dangerous actors.
"an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:
- the death of a person, or serious harm to a person's health;
- a serious and irreversible disruption of the management and operation of critical infrastructure;
- the infringement of obligations under Union law intended to protect fundamental rights;
- serious harm to property or the environment." (EU AI Act, Chap. I, Art. 3(49))
This research was sponsored by RAND's Institute for Civil Justice and conducted in the Justice Policy Program within RAND Social and Economic Well-Being and the Science and Emerging Technology Research Group within RAND Europe.
This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.