The European Commission Proposes Artificial Intelligence Regulations That Could Reach Businesses Worldwide
March 13, 2020
On February 19, 2020, the European Commission released its White Paper on Artificial Intelligence (A European approach to excellence and trust), articulating the European Union’s vision of how best to address the risks and opportunities associated with artificial intelligence (AI) technologies.1 The White Paper is one of many recent efforts by countries and organizations to address the rapid development and adoption of AI. Companies developing and incorporating AI technology into their products and services should stay abreast of these efforts, as they are laying the groundwork for potential industry and regulatory standards.
To promote the adoption of AI while addressing risks associated with certain AI applications, the Commission proposed creating a regulatory framework that will create a unique “ecosystem of trust” surrounding AI technology and a policy framework that will mobilize resources across Europe to achieve an “ecosystem of excellence” along the entire AI value chain.
Ecosystem of Trust. The Commission has concerns that the “opacity, complexity, unpredictability, and partially autonomous behavior” of AI may undermine fundamental rights and make it difficult for individuals and enforcement authorities to verify whether entities in the AI supply chain are compliant with EU legal obligations. Additionally, when embedded in certain products, AI could create legal uncertainty regarding liability for safety incidents within the supply chain.
The Commission identified several possible adjustments to its existing legislative framework, including extending the scope of existing product safety rules to cover stand-alone software and services; developing a mechanism to incentivize companies and other organizations to address new risks introduced through an after-market software update; and clarifying whether responsibility attaches if AI is added after a product is placed on the market by a party that is not the producer.
Additionally, the Commission proposed adopting new regulations to address issues associated with AI. The new mandatory regulations would only apply to “high-risk” AI applications. An AI application is considered high-risk if:
- Significant risks are expected to occur in the sector in which AI is used (e.g. healthcare, transport, energy, some public sector areas), and
- Significant risks are likely to arise from the way in which the AI is used.
In addition, the Commission envisioned that there may be “exceptional instances” where the use of AI may likewise be deemed high-risk, such as remote biometric identification or where it affects workers’ rights.2 Regulation of high-risk AI would be binding. However, the Commission is also considering voluntary certification standards for non-high-risk AI applications. Although participation would be voluntary, once an entity involved in non-high-risk AI is certified under these standards, the requirements would become binding.
The mandatory requirements for high-risk AI applications could include ensuring that the data sets used to train the AI technology meet safety, non-discrimination, and privacy standards. The new regulations could also require retention of data sets and records regarding the usage of data sets, as well as imposing requirements to ensure that AI systems are accurate and their results are reproducible. The Commission also proposed to ensure adequate access to information regarding an AI system’s capabilities and limitations for users, and to guarantee that citizens receive adequate notice when interacting with an AI system. Additionally, the Commission is considering when human oversight should be mandated for high-risk AI applications.
Under the proposal, each new obligation would be “addressed to the actor(s) who is (are) best placed to address any potential risk,” regardless of which party is liable to the end user. Additionally, the requirements would apply to all relevant economic operators providing AI-enabled products in the EU, whether based within or outside the EU.
Ecosystem of Excellence. In addition to this regulatory framework, the Commission laid out a policy framework to align efforts at the European, national, and regional level to achieve an “ecosystem of excellence” along the entire value chain. These measures include increasing research and development efforts, including AI “excellence and testing centres” and new public-private partnerships in AI, data, and robotics. Additionally, the Commission called for increased education and training infrastructure to develop a workforce with skills necessary to develop AI.
The White Paper is open for public consultation until May 19, 2020, at which point the Commission will begin work on final regulations. Any final regulations will need to be approved by the European Parliament and national governments, which would likely not happen until next year.
1 The White Paper defines AI as “a collection of technologies that combine data, algorithms, and computing power.” White Paper, p. 2.
2 The Commission is planning to “launch a broad European debate on the specific circumstances, if any, which might justify [the use of AI for remote biometric identification purposes], and on common safeguards.” White Paper, p. 22.
This memorandum is a summary for general information and discussion only and may be considered an advertisement for certain purposes. It is not a full analysis of the matters presented, may not be relied upon as legal advice, and does not purport to represent the views of our clients or the Firm. Steve Bunnell, an O’Melveny partner licensed to practice law in the District of Columbia, Lisa Monaco, an O’Melveny partner licensed to practice law in the District of Columbia and New York, Riccardo Celli, an O’Melveny partner licensed to practice law in England, Wales, and Italy, John Dermody, an O’Melveny counsel licensed to practice law in California, and Kristin R. Marshall, an O’Melveny associate licensed to practice law in the District of Columbia and Missouri, contributed to the content of this newsletter. The views expressed in this newsletter are the views of the authors except as otherwise noted.
© 2020 O’Melveny & Myers LLP. All Rights Reserved. Portions of this communication may contain attorney advertising. Prior results do not guarantee a similar outcome. Please direct all inquiries regarding New York’s Rules of Professional Conduct to O’Melveny & Myers LLP, Times Square Tower, 7 Times Square, New York, NY, 10036, T: +1 212 326 2000.