General-Purpose Artificial Intelligence (GPAI) Models and GPAI Models with Systemic Risk

Classification and Requirements for Providers

Gregory Smith, Karlyn D. Stanley, Krystyna Marcinek, Paul Cormarie, Salil Gunashekar

Research | Published Aug 8, 2024


Key Takeaways

The European Union (EU)'s Artificial Intelligence (AI) Act regulations will apply to U.S. companies seeking to operate in the EU market.[a] These regulations include several obligations specific to the most-advanced and most-powerful AI models, which the EU AI Act categorizes as general-purpose AI (GPAI) models. Within this group of models is an even more targeted category of high-impact-capability GPAI models, classified as GPAI models with potential systemic risk. The European Union's treatment of such high-impact-capability GPAI models overlaps with reporting requirements that the United States created in 2023 for "dual-use foundation models";[b] however, the initial thresholds for covered models differ between the United States and the European Union, and the exact standards for identifying risk are still being developed in both jurisdictions.

The U.S. government has several options regarding GPAI models to consider in response to the EU AI Act, falling into three broad categories:

  • Cooperate on standard setting for evaluations.
    • Policymakers could consider empowering the National Institute of Standards and Technology to collaborate with the EU AI Office and other international bodies to harmonize AI model testing practices and evaluation standards across the United States, the United Kingdom, and the European Union.
  • Harmonize reporting requirements.
    • Consider requiring that any information from U.S. companies reported to the European Union under the GPAI provisions of the EU AI Act be reported to the U.S. government as well.
    • Consider requiring that information that the EU AI Act requires to be disclosed to third-party deployers of GPAI models in the European Union also be disclosed to third-party deployers of such models in the United States.
  • Track companies' risk-management and incident reporting.
    • The United States could implement requirements that companies report their risk-management and incident-reporting frameworks for serious incidents that occur with their models, particularly for such high-risk areas as biosecurity and cybersecurity. These could be the same as those in the EU AI Act or just require that any incident reported to the European Union also be reported to U.S. authorities.
    • The United States could also mimic the EU AI Act's requirements that the most-powerful models be given a certain level of cybersecurity protections.

[a] European Union, "Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance)," June 13, 2024.

[b] Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," Executive Office of the President, October 30, 2023.

The European Union (EU)'s Artificial Intelligence (AI) Act is a landmark piece of legislation that lays out a detailed, wide-ranging framework for the comprehensive regulation of AI in the European Union, covering its development, testing, and use. As part of this framework, European policymakers have included rules specific to the most-advanced and most-powerful AI, often referred to as foundation models, that have driven much of the recent improvement in AI capabilities.[1] This report provides a snapshot of the European Union's regulation of such models, which are often central to AI governance discussions in the United States, the European Union, and globally. The report does not aim to provide a comprehensive analysis but rather is intended to spark dialogue among stakeholders on specific facets of AI governance, especially as AI applications proliferate worldwide and complex governance debates persist. Although we refrain from offering definitive recommendations in this report, we explore a set of priority options that the United States could consider in relation to different aspects of AI governance in light of the EU AI Act. We do so to inform U.S. policymakers about international decisions on AI development, which, in turn, could shape whether U.S. policymakers incorporate similar provisions into their own proposals or instead pursue a different, more hands-off approach to AI regulation.

Under the EU AI Act, most foundation models are categorized as general-purpose AI (GPAI) and have special requirements imposed on them in addition to other rules that might attach under the EU AI Act's other regulatory provisions. Some of the requirements the European Union has put forward (such as the testing requirements for models posing a systemic risk) dovetail with existing U.S. actions regarding AI (such as the requirement, under the Defense Production Act, that certain developers report on tests performed on certain classes of AI).[2] However, the EU AI Act's requirements do mean that U.S. companies will now be subject to reporting and testing requirements for their AI models if they wish to sell their products in the European Union, regardless of what actions or policies the United States chooses to pursue in regulating AI. These rules, therefore, are of significant importance to AI regulation globally, both as a potential model for future AI regulation and because of the requirements they will place on U.S. companies.

Regulatory Requirements for GPAI Models and AI Models That Pose a Systemic Risk

The EU AI Act divides GPAI models into two categories: models that qualify as GPAI and a subset of those models that are identified as posing a systemic risk.[3] This section first summarizes the requirements for all GPAI models and then details the additional requirements imposed on GPAI models that pose a systemic risk under the EU AI Act. It should be noted that the GPAI regulations do not preclude requirements being imposed under other sections of the EU AI Act: For example, a GPAI model can be integrated into another AI system and be subject to additional regulations as a high-risk system.[4]

AI models are categorized as GPAI models when they display the capability to competently perform a wide variety of distinct tasks.[5] The EU AI Act's text clarifies that "models with at least a billion of [sic] parameters and trained with a large amount of data using self-supervision at scale should be considered to display significant generality."[6] The EU AI Act then applies requirements to the GPAI model's provider, which the EU AI Act defines as the party that develops the AI system and makes it available in the EU market or supplies it to a third-party deployer of the model in the European Union.[7] This means that the party required to comply with these regulations is generally going to be the developer of that AI model—even if that company is not based in the European Union—so long as the company is making its model available within the European Union directly or to third parties who then deploy it.

There are four broad requirements that the EU AI Act imposes on a GPAI model's provider:

  • Create technical documentation, including training and testing processes and results, for oversight bodies.
  • Provide information and technical documentation that allow downstream providers to comply with their obligations when they integrate the GPAI model into an AI system, especially when it becomes a high-risk system.[8]
  • Establish a policy to comply with the European Union's copyright directive.
  • Publish a detailed summary of the content used to train the GPAI model.[9]

Under certain conditions, the first two categories do not apply to those providers who make their models accessible to the public "under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available."[10]

GPAI model providers are required to "draw up and keep up-to-date" documentation containing a general description of the model's intended tasks; the architecture and number of parameters; the modality and format of inputs and outputs; the types and nature of AI systems in which the GPAI model can be integrated; the date and method of release and distribution; the acceptable use policies the GPAI model developer maintains; and the license associated with the model.[11] Providers must make some of this information available to those "who intend to integrate the general-purpose AI in their AI system."[12] They must also make available relevant information regarding the technical means required to integrate the GPAI model into an AI system, as well as some information about the datasets used for training, testing, and validation.[13] However, the information required to be provided to the EU AI Office is more detailed than the information that must be provided to downstream providers. For example, the AI Office, established within the European Commission by the EU AI Act, has the right to request information about model design, training, and estimated energy consumption that a downstream provider does not have a right to.[14] In specific cases, the national competent authorities also have this right.[15] The European Commission has also been empowered to further detail these requirements in delegated acts.[16]

Regulatory Requirements Specific to GPAI Models Identified as a Systemic Risk

Certain GPAI models can also be identified as having high-impact capabilities and, therefore, be classified as having systemic risk and be subject to additional regulatory requirements.[17] This can occur under two circumstances:

  1. The GPAI model is determined to have "high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks."[18] Currently, the high-impact-capability threshold is defined as more than 10^25 floating-point operations (FLOP)[19] of compute being used for model training, but those tools and benchmarks might be updated by the European Commission at its discretion to reflect technological advances.[20]
  2. The commission may, either ex officio or following a qualified alert by the EU AI Act's scientific panel,[21] determine that a GPAI model has high-impact capabilities, even if the model does not meet the 10^25 FLOP threshold.[22] (Both pathways are illustrated in the sketch after this list.)
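To make the two classification pathways concrete, here is a minimal Python sketch of the presumption logic described above. It is an illustration only, not an official EU tool: the constant, function, and example compute figures are hypothetical, and the FLOP threshold itself may be revised by the European Commission.

```python
# Illustrative sketch of the EU AI Act's two pathways for classifying a GPAI
# model as posing systemic risk (Chap. V, Sec. 1, Art. 51). All names and
# figures are hypothetical; the 10^25 FLOP threshold may be revised.

EU_SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in FLOP

def presumed_systemic_risk(training_flop: float,
                           commission_designation: bool = False) -> bool:
    """Return True if a GPAI model would be presumed to pose systemic risk.

    Pathway 1: cumulative training compute exceeds the FLOP threshold.
    Pathway 2: the commission designates the model ex officio or after a
    qualified alert from the scientific panel.
    """
    return training_flop > EU_SYSTEMIC_RISK_FLOP_THRESHOLD or commission_designation

print(presumed_systemic_risk(2e25))        # True: pathway 1 (above threshold)
print(presumed_systemic_risk(5e24))        # False: below threshold, no designation
print(presumed_systemic_risk(5e24, True))  # True: pathway 2 (designation)
```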

Several additional obligations are imposed on the providers of GPAI models with systemic risks. First, they are required to "perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks."[23] Though the EU AI Act itself does not define adversarial testing, it is possible that red-teaming—structured testing to identify a model's flaws, vulnerabilities, and unforeseen behaviors—will now be required for such models.[24]

Second, the model provider must "assess and mitigate" possible systemic risks associated with the model.[25] What these risks might be and how they should be managed will be explained in the codes of practice, which will be developed by the EU AI Office with input from providers by May 2, 2025, at the latest.[26]

Third, providers must "keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them."[27]

Fourth, providers must "ensure an adequate level of cybersecurity" for the model and its physical infrastructure.[28] The EU AI Act also empowers the commission to maintain and publish a list of all GPAI models with systemic risk.[29]

In addition, unlike the broader rules for all GPAI models, there are no exemptions from the regulations for open-source GPAI models that pose systemic risk. Therefore, open-source developers will also have to comply with all the regulatory requirements outlined above if they wish to provide a GPAI model with systemic risk to the EU market.

Many of the specific requirements, such as what tests might be required for a GPAI model with systemic risks, are not fully defined in the EU AI Act itself. As part of creating such rules, the EU AI Office established by the EU AI Act has invited GPAI providers, along with regulators and civil society bodies, to participate in drawing up codes of practice, which would clarify how GPAI providers could demonstrate compliance as well as how developers would be required to manage the systemic risks of their models.[30]

The EU AI Act also provides for penalties should a party not comply with these GPAI-specific regulations. If a provider of a GPAI model violates these provisions, it may be subject to a fine of "3% of [its] annual total worldwide turnover in the preceding financial year or [15 million euros], whichever is higher."[31]
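The penalty ceiling is thus a simple maximum of two quantities. As a worked illustration (a minimal sketch; the turnover figures below are hypothetical):

```python
# Illustrative calculation of the EU AI Act's GPAI penalty ceiling
# (Chap. XII, Art. 101(1)): the higher of 3% of annual total worldwide
# turnover or 15 million euros. Turnover figures are hypothetical.

def max_gpai_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    return max(0.03 * annual_worldwide_turnover_eur, 15_000_000)

print(max_gpai_fine_eur(2_000_000_000))  # 60 million euros (3% of 2 billion)
print(max_gpai_fine_eur(100_000_000))    # 15 million euros (the floor applies)
```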

Implications of the EU AI Act's GPAI Regulations for U.S. Companies

A key implication of the GPAI systemic risk rules outlined above is that the most-advanced foundation models currently available to the public are already subject to the GPAI systemic risk provisions of the EU AI Act. For example, according to the European Commission, the 10^25 FLOP threshold captures OpenAI's GPT-4 and possibly Google's Gemini.[32] Furthermore, the regulation posits that systemic risk increases with greater model capability; the amount of computing power used to train the model is only one possible approximation of that risk.[33] Consequently, it is possible that all future, more-capable foundation models developed in the United States whose developers wish to provide these models in the European Union will also be subject to the EU AI Act’s systemic risk regulations, regardless of the exact number of FLOP used for training.[34]

While U.S. companies could sidestep the regulatory requirements of the EU AI Act by never deploying their models in the European Union, it is unlikely that the large multinational companies deploying AI models will permanently forgo the EU market to escape the regulations on GPAI models. It is possible that the majority of commercial models developed in the United States will follow and comply with EU regulations on GPAI with systemic risk regardless of what legislation or regulation exists in the United States. Therefore, it is also possible that—regardless of whatever approach the United States takes to AI regulation—there will be some sort of testing regime with which major AI labs in the United States are likely to comply, and the results of those tests will be reported to EU bodies.

Furthermore, this presents an opportunity for U.S. companies to engage with the development of the EU AI codes of practice—a set of documents that the EU AI Act has instructed the EU AI Office to create that will “contribute to the proper application” of the EU AI Act.[35] The EU AI Act empowers the EU AI Office to invite "providers of general-purpose AI models . . . to participate in the drawing-up of codes of practice."[36] Only a limited number of companies meet the threshold of high-impact capability, and many of them are based in the United States; therefore, U.S. companies will have the opportunity to contribute to these standards if they provide input to the commission and other EU offices as the codes of practice are being formulated in the coming year.

Convergences Between U.S. AI Policy and the GPAI Provisions of the EU AI Act

Although the EU AI Act does impose requirements on leading U.S. AI companies seeking to enter the European market, these requirements should not be considered entirely novel. In certain key respects, they converge with existing AI policy in the United States.

Most notably, the definition and reporting requirements imposed by the EU AI Act on GPAI models posing a systemic risk dovetail with those issued for "dual-use foundation models" in the October 30, 2023, Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110).[37] The EO requires that developers of "dual-use foundation models"—meaning any model that "exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters"—report any development plans, cybersecurity protections of those models, and the results of any red-teaming carried out on those models to the Bureau of Industry and Security within the Department of Commerce under the Defense Production Act.[38] This requirement is similar to the EU AI Act's testing and reporting requirements for GPAI models with systemic risk, though the EU AI Act mandates that certain tests be performed and imposes financial fines for noncompliance, while EO 14110 requires only the reporting of information on such tests if they have actually been conducted.

Both regulations define models with higher reporting requirements through model capabilities and performance. However, it should be noted that the regulations currently use different FLOP thresholds: The EU AI Act sets its threshold at 10^25 FLOP, while the EO uses a 10^26 FLOP threshold.[39] Both thresholds can be updated if judged necessary by the European Union or the United States. For now, however, this discrepancy means that the European Union will require testing on a broader set of models than is called for in the United States.
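To illustrate what this one-order-of-magnitude gap means in practice, the following minimal Python sketch compares coverage under the two thresholds; the model compute figures are hypothetical, and both thresholds are subject to revision by the respective governments.

```python
# Illustrative comparison of the two compute thresholds: the EU AI Act
# presumes systemic risk above 10^25 FLOP, while EO 14110's reporting
# trigger is 10^26 FLOP. Model compute figures below are hypothetical.

EU_THRESHOLD_FLOP = 1e25  # EU AI Act, Chap. V, Sec. 1, Art. 51(2)
US_THRESHOLD_FLOP = 1e26  # EO 14110, Sec. 4.2(b)(i)

for training_flop in (5e24, 5e25, 2e26):
    covered_eu = training_flop > EU_THRESHOLD_FLOP
    covered_us = training_flop > US_THRESHOLD_FLOP
    print(f"{training_flop:.0e} FLOP -> EU: {covered_eu}, US: {covered_us}")

# A hypothetical model trained with 5 x 10^25 FLOP (the middle case) would be
# presumed to pose systemic risk in the EU but fall below the EO's reporting
# threshold, so the EU rules capture a broader set of models.
```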

The EU AI Act and EO 14110 are also complementary in emphasizing testing and red-teaming of AI foundation models. EO 14110 directs the National Institute of Standards and Technology (NIST) to identify and establish guidelines and standards to test AI for dangerous capabilities and to establish broader standards to assess an AI's "safety, security, and trustworthiness."[40] The EU AI Act requires that testing be conducted using "state of the art" benchmarks but does not specify which benchmarks should be used. These similar approaches create an opportunity for synergy and collaboration between the work of NIST and the EU AI Office in creating guidelines and cutting-edge benchmarks for model testing.

Options for U.S. Response to the EU AI Act

The EU AI Act provides the United States with an opportunity to respond to and potentially collaborate with the European Union as the latter implements the EU AI Act because the two share a goal of making AI models safer and more trustworthy. The EU AI Act's regulation of GPAI and potential systemic risk from the most-powerful systems provides a potential template for more-comprehensive regulation of AI in the United States. One option would be for the United States to adopt some or all of the provisions of the EU AI Act on GPAI models, such as the reporting requirements for AI models identified as posing a systemic risk. This could harmonize aspects of the regulatory environment in both the United States and Europe and ensure that U.S. consumers would have the same level of protection as that extended to European consumers.

However, it may not be desirable to implement EU regulation of GPAI wholesale; many specifics of the European Union's implementation of its GPAI provisions are not yet known. Instead, the United States could focus its efforts on only those models that the European Union would classify as being a systemic risk.

There are three specific areas in which the United States could implement policy alongside the EU AI Act now without having to adopt the entire EU model: (1) cooperating on standards for evaluating AI, (2) harmonizing reporting requirements between the United States and the European Union, and (3) tracking companies' risk-management and incident-mitigation efforts.

Cooperating on AI Evaluation Standards

The United States could play a more active role in developing the standards by which AI models are evaluated under the EU AI Act and other rules. A key mechanism for such collaboration is the EU AI Office, which was established by the EU AI Act and empowered to "contribut[e] to international cooperation related to AI regulation and governance."[41] NIST and other U.S. government and private-sector bodies with an interest in AI policy could be encouraged to collaborate with the EU AI Office to establish standards for GPAI evaluation. Specifically, NIST's work under EO 14110 might help develop benchmarks and identify standards for AI evaluation. This would provide the United States with an opportunity to lead on developing the new science of evaluations alongside its strategic partners in Europe and promote a harmonized set of AI testing results in both jurisdictions. Initial steps in this direction have already been taken: The EU AI Office and the U.S. AI Safety Institute have "committed to establishing a Dialogue to deepen their collaboration."[42]

The United Kingdom's AI Safety Institute, which was recently established to help research and promote safe AI development practices,[43] might be another natural partner in developing the appropriate testing practices and benchmarks that will be necessary for implementing the EU AI Act's testing requirements. Initial actions fostering such cooperative endeavors between the United States and the United Kingdom are already underway.[44] Harmonizing these testing practices across the United States, the United Kingdom, the European Union, and such countries as Singapore and Japan (which have both taken steps on AI testing)[45] could help support a shared AI ecosystem in which companies can easily operate across multiple markets.

Harmonizing Reporting Requirements Between the United States and the European Union

The EU AI Act places multiple reporting and information-sharing requirements on GPAI providers operating in the European Union. These requirements will exist regardless of U.S. policymaking decisions; therefore, the United States could ensure that it has access to the same information as EU decisionmakers. For example, the United States could pass a requirement that any information reported to the European Union under the EU AI Act's regulation of GPAI also be reported to U.S. regulators and policymakers. This would potentially expand the amount of information reported to the U.S. government beyond what EO 14110 currently requires. (The EO currently requires only the reporting of tests that an AI model developer undertakes; it does not mandate any tests, whereas the EU AI Act does mandate certain evaluations.) However, this policy could ensure that the United States has the same level of visibility into its companies' products as EU policymakers will have into theirs regarding such issues as systemic risks and any serious incidents that occur in such models.

The United States could also maintain EO 14110 reporting requirements for AI models above the EO’s 10^26 FLOP threshold. This would create a separate reporting requirement from the European Union's, but it would ensure that the United States could continue to gather some information regarding the national security–relevant capabilities of such models.

U.S. policymakers could also mirror the EU AI Act's requirement that GPAI providers supply technical documentation to the third parties that deploy their models. Implementing a similar requirement in the U.S. market could ensure that U.S. companies using AI models provided by other companies would have the same access to information about those AI models as European companies deploying the same AI models.

Tracking Companies' Risk-Management Efforts and Incident Mitigation

The EU AI Act requires that providers of GPAI models with systemic risk "assess and mitigate" such risks as well as report "serious incidents" and "possible corrective measures" to address such incidents.[46] The United States might consider similar requirements that major risks in models be reported to the government, perhaps for a set of particularly significant risks, such as bioweapon and/or cyberweapon production. Such a requirement could also include formalizing incident-reporting protocols if a company's AI were used to try to produce such threats. These requirements could also broaden to include companies reporting risk-management practices for their AI models so that the U.S. government could understand how companies were mitigating the potential threats their models might pose.

The EU AI Act also imposes cybersecurity requirements on GPAI models with systemic risk.[47] Although these requirements are not yet defined, the United States might want to consider similar requirements for the developers of the most-powerful models to ensure that they are not stolen by dangerous actors.

Notes

  • [1] For a discussion of foundation models, see Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al., "On the Opportunities and Risks of Foundation Models," arXiv, arXiv:2108.07258, July 12, 2022.
  • [2] Executive Order (EO) 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," Executive Office of the President, October 30, 2023, Section 4.2, Ensuring Safe and Reliable AI. As of June 3, 2024:
    https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
  • [3] European Union, "Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance)," June 13, 2024, Recital 97. Hereafter cited as the EU AI Act, this legislation was adopted by the European Parliament in March 2024 and approved by the European Council in June 2024. All text cited in this report related to the EU AI Act can be found at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689#d1e5435-1-1
  • [4] EU AI Act, Recital 85 notes that "[g]eneral-purpose AI systems may be used as high-risk AI systems by themselves or be components of other high-risk AI systems" and that providers of general-purpose AI systems should “closely cooperate with the providers of the relevant high-risk AI systems to enable their compliance” with the EU AI Act. EU AI Act, Chap. III, Sec. 3, Art. 25(1)(c) notes that general-purpose AI systems which are not classified as high-risk might be modified to become high-risk systems under the Act. Separate requirements for high-risk systems are laid out in EU AI Act, Chap. III, Sec. 2.
  • [5] EU AI Act, Chap. I, Art. 3(63).
  • [6] Large AI models consist of parameters, which are the internal variables of the model that are "learned" during the training from the underlying training data. EU AI Act, Recital 98.
  • [7] EU AI Act, Chap. I, Art. 3(3). Note that an entity may be classified as a "provider" if it places its AI system "on the market," which means to make an AI system available on the EU market (EU AI Act, Chap. I, Art. 3(9)), or "puts the AI system into service," meaning supplying an AI system for first use directly to a deployer or for the provider's own use (EU AI Act, Chap. I, Art. 3(11)).
  • [8] According to the EU AI Act, Chap. I, Art. 3(68), "'[D]ownstream provider' means a provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations."
  • [9] EU AI Act, Chap. V, Sec. 2, Art. 53(1)(a-d).
  • [10] EU AI Act, Chap. V, Sec. 2, Art. 53(2).
  • [11] EU AI Act, Chap. V, Sec. 2, Art. 53(1)(a-b).
  • [12] EU AI Act, Chap. V, Sec. 2, Art. 53(1)(b); EU AI Act, Annex XII.
  • [13] EU AI Act, Annex XI(2); EU AI Act, Annex XII(2).
  • [14] EU AI Act, Annex XI lays out the requirements for information to be provided to the EU AI Office; EU AI Act, Annex XII lays out the information to be provided to downstream providers. Annex XI requires more information to be provided than Annex XII does; for more on this, see European Union, "Commission Decision of 24.1.2024 Establishing the European Artificial Intelligence Office," Brussels, January 24, 2024.
  • [15] National competent authorities refers to EU member state bodies—for example, "a notifying authority or a market surveillance authority"—that also have authority to act under the EU AI Act. See EU AI Act, Chap. I, Art. 3(48).
  • [16] EU AI Act, Chap. V, Sec. 2, Art. 53(5-6).
  • [17] Systemic risk is defined by the EU AI Act as "a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain." See EU AI Act, Chap. I, Art. 3(65).
  • [18] EU AI Act, Chap. V, Sec. 1, Art. 51(1)(a). The EU AI Act lays out several criteria the commission shall consider when determining whether a model has high-impact capabilities and should be classified as having systemic risk. Those include the parameters of the model, the data used in training, the compute used in training, the potential output of the model, its benchmarks on capabilities (including its ability to learn new tasks and act autonomously), and the number of registered end users. These criteria can be found in EU AI Act, Annex XIII.
  • [19] FLOP refers to the number of floating-point operations and is a sum of the calculations used to train a particular AI model. Broadly, the bigger the number of FLOP, the more computing power is used to train the model.
  • [20] EU AI Act, Chap. V, Sec. 1, Art 51(2-3).
  • [21] The EU AI Act also establishes a scientific panel of independent experts to support and advise the EU AI Office. The commission’s power to act ex officio allows it to determine whether a GPAI model has high-impact capabilities on the commission's own authority. EU AI Act, Chap. V, Sec. 1, Art. 51(1)(b).
  • [22] EU AI Act, Chap. V, Sec. 1, Art. 51(1).
  • [23] EU AI Act, Chap. V, Sec. 3, Art. 55(1)(a).
  • [24] EU AI Act, Annex XI, Sec. 2(2). On red-teaming, an example of such adversarial testing can be found in OpenAI, GPT-4 System Card, March 23, 2023. For further discussion of red-teaming models, see Marie-Laure Hicks, Ella Guest, Jess Whittlestone, Jacob Ohrvik-Stott, Sana Zakaria, Cecilia Ang, Chryssa Politi, Imogen Wade, and Salil Gunashekar, Exploring Red Teaming to Identify New and Emerging Risks from AI Foundation Models, RAND Corporation, CF-A3031-1, 2023. As of February 28, 2024: https://www.rand.org/pubs/conf_proceedings/CFA3031-1.html
  • [25] EU AI Act, Chap. V, Sec. 3, Art. 55(1)(b).
  • [26] These codes of practice are not yet developed, and what form they will take is uncertain as of this writing. EU AI Act, Chap. V, Sec. 4, Art. 56(2, 7-9).
  • [27] EU AI Act, Chap. V, Sec. 3, Art 55(1)(c). The EU AI Act defines a serious incident as
    "an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:
    1. the death of a person, or serious harm to a person's health;
    2. a serious and irreversible disruption of the management and operation of critical infrastructure;
    3. the infringement of obligations under Union law intended to protect fundamental rights;
    4. serious harm to property or the environment." (EU AI Act, Chap. I, Art. 3(49))
  • [28] EU AI Act, Chap. V, Sec. 3, Art. 55(1)(d).
  • [29] EU AI Act, Chap. V, Sec. 1, Art. 52(6).
  • [30] EU AI Act, Chap. V, Sec. 4, Art. 56. On July 30, 2024, the European AI Office launched a public call for expressions of interest in participating in the drafting of the first GPAI Code of Practice. In addition to GPAI model providers, those invited included “downstream providers and other industry organisations, other stakeholder organisations such as civil society organisations or rightsholders organisations, as well as academia and other independent experts.” European Commission, “AI Act: Participate in the Drawing Up of the First General-Purpose AI Codes of Practice,” webpage, July 30, 2024. As of August 5, 2024: https://digital-strategy.ec.europa.eu/en/news/ai-act-participate-drawing-first-general-purpose-ai-code-practice.
  • [31] EU AI Act, Chap. XII, Art. 101(1).
  • [32] European Commission, "Artificial Intelligence—Questions and Answers*," webpage, December 12, 2023. As of February 8, 2024:
    https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683.
  • [33] EU AI Act, Recital 111.
  • [34] The number of FLOP being used to train advanced models is expected to continue to grow, but the European Commission recognizes that future technological advances might allow GPAI models to reach the same capabilities with fewer FLOP used to train them. Consequently, the European Commission allows for the possibility that the FLOP threshold can be updated upward or downward to capture systemic risk capabilities in the light of technological progress (European Commission, "Artificial Intelligence—Questions and Answers*," webpage, December 12, 2023. As of February 8, 2024: https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683). Regardless, if some of today’s models are already classified as posing a systemic risk, their more-capable successors are all but guaranteed to fall under the same scrutiny.
  • [35] EU AI Act, Chap. V, Sec. 4, Art. 56(1).
  • [36] EU AI Act, Chap. V, Sec. 4, Art. 56(3).
  • [37] These requirements are in EO 14110, Section 4.2, Ensuring Safe and Reliable AI.
  • [38] EO 14110, Section 3, Definitions, paragraph k, defines "dual-use foundation model." EO 14110, Section 4.2, Ensuring Safe and Reliable AI, requires reporting under the Defense Production Act and describes the information to be reported under these requirements.
  • [39] EO 14110, Section 4.2(b)(i), Ensuring Safe and Reliable AI.
  • [40] EO 14110, Section 4.1, Developing Guidelines, Standards, and Best Practices for AI Safety and Security.
  • [41] European Commission, COMMISSION DECISION of 24.1.2024 Establishing the European Artificial Intelligence Office, C(2024) 390 final, Brussels, January 24, 2024, Article 7(1)(b).
  • [42] White House, "U.S.-EU Joint Statement of the Trade and Technology Council," April 5, 2024.
  • [43] UK AI Safety Institute, Introducing the AI Safety Institute, Department for Science, Innovation & Technology, updated January 17, 2024.
  • [44] U.S. Department of Commerce, "U.S. and UK Announce Partnership on Science of AI Safety," press release, April 1, 2024.
  • [45] Singapore's approach to AI governance is described in Singapore Personal Data Protection Commission, "Singapore’s Approach to AI Governance," webpage, undated. As of February 28, 2024: https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework. Japan's launch of its own AI Safety Institute is described in Ministry of Economy, Trade, and Industry, "Launch of AI Safety Institute," webpage, February 14, 2024. As of February 27, 2024: https://www.meti.go.jp/english/press/2024/0214_001.html.
  • [46] EU AI Act, Chap. V, Sec. 3, Art. 55(1)(b-c).
  • [47] EU AI Act, Chap. V, Sec. 3, Art. 55(1)(d).


This research was sponsored by RAND's Institute for Civil Justice and conducted in the Justice Policy Program within RAND Social and Economic Well-Being and the Science and Emerging Technology Research Group within RAND Europe.

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.