US Government’s AI Policy Coming into Focus in Series of Recent Announcements
May 15, 2023
On May 4, the White House announced a number of new actions “to promote responsible AI innovation that protects Americans’ rights and safety,” including a commitment from leading generative AI companies to submit models for public evaluation.1 The announcement came on the heels of statements about AI regulation from the Federal Trade Commission and several other agencies, as well as AI guidance and policy statements issued by the administration in the last few months. Taken together, these statements and initiatives clarify the federal government’s emerging AI policy. Several themes emerge from these recent developments:
- With little clarity on when or if Congress will pass new AI legislation, the current focus is on (1) applying existing laws and regulations to address potential AI harms, (2) using voluntary industry compliance with non-binding guidance to mitigate AI risks and promote responsible AI development, and (3) using the federal government’s own AI deployment and investments to set a positive example for the private sector.
- The administration appears particularly focused on several perceived AI risks, including (1) algorithmic discrimination, (2) risks to user privacy, (3) lack of transparency over when or how AI is being used, (4) lack of widespread access to computing and data inputs necessary to develop AI, and (5) the potential for bad actors to use AI tools for fraud, misinformation campaigns, and other misdeeds.
In this alert, we summarize recent government actions and announcements on AI and highlight the emerging high-level themes in the Biden administration’s AI policy. In future alerts, we plan to bring you in-depth analysis on specific areas of law implicated by AI, including intellectual property, data privacy, anti-discrimination laws, securities laws, and more.
I. White House Statement
On May 4, the White House announced a number of AI-related initiatives:
- Public assessment of existing generative AI systems. The administration announced that leading AI developers had voluntarily committed “to participate in a public evaluation of AI systems, consistent with responsible disclosure principles” which “will allow these models to be evaluated thoroughly by thousands of community partners and AI experts to explore how the models align with the principles and practices outlined in the Biden-Harris Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework.”2 As we explain below, the Blueprint and the Risk Management Framework were released by the government in the past year to promote trustworthy AI and lay out core AI policy principles such as protection from algorithmic discrimination, emphasis on data privacy, and notice and explanation when AI is being used. And as detailed in a prior O’Melveny alert, public assessment of generative AI models has been near the top of some advocacy groups’ regulatory wishlists, with the Center for Artificial Intelligence and Digital Policy even filing a complaint asking the FTC to require an independent assessment of ChatGPT.3
- Meetings with leading AI companies. Vice President Kamala Harris met with the CEOs of four leading AI companies—Alphabet, Anthropic, Microsoft, and OpenAI.4 After the meeting, the Vice President released a statement indicating that her message to the AI executives had been that “the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products” and “every company must comply with existing laws to protect the American people.”5 The Vice President’s statement also referenced “advancing potential new regulations and supporting new legislation,” but did not offer any specifics.6
- Upcoming policy guidance on the use of AI by the US government. The Office of Management and Budget announced that it will release draft policy guidance on the use of AI systems by the US government for public comment this summer.7 According to the White House statement, this guidance will also “serve as a model for…businesses and others to follow in their own procurement and use of AI.”8
- Investments in AI research & development. The National Science Foundation announced that it would spend $140 million to fund seven new National AI Research Institutes, bringing the total number of institutes to 25.9 The new institutes will focus on six areas of research: (1) trustworthy AI (AI that incorporates ethics and respect for human rights); (2) use of AI in cybersecurity; (3) use of AI in climate-smart agriculture and forestry; (4) interdisciplinary research in neuroscience, cognitive science, and AI; (5) use of AI to aid decision-making; and (6) use of AI in education.10
II. Biden-Harris Administration’s Prior Efforts in AI
The May 4 White House announcement continues the administration’s AI policy of emphasizing the applicability of existing laws, issuing voluntary guidance and policy statements, and leading by example through government initiatives focused on responsible AI R&D and deployment.
- Blueprint for an AI Bill of Rights.11 On October 4, 2022, the White House Office of Science and Technology Policy published a policy blueprint which identifies five principles for AI design, use, and deployment:
(i) ensuring safe and effective systems through pre-deployment testing, risk identification and mitigation, and ongoing monitoring;
(ii) protection from algorithmic discrimination through, e.g., proactive equity assessment, use of representative data, and protection against use of proxies for demographic features;
(iii) emphasis on data privacy through design choices that ensure privacy by default and require user permission for the collection and use of personal data;
(iv) providing notice that AI is being used and explanation of how and why AI contributed to decisions; and
(v) the right to opt out and have access to a human alternative to AI, where appropriate.
- Roadmap for standing up a National AI Research Resource.12 On January 24, 2023, the White House announced that the National Artificial Intelligence Research Resource Task Force released An Implementation Plan for a National AI Research Resource (“NAIRR”). The plan explains that “[m]uch of today’s AI research relies on access to large volumes of data and advanced computational power, which are often unavailable to researchers beyond those at well-resourced technology companies and universities.”13 To improve access, NAIRR intends to provide a “widely accessible AI research cyberinfrastructure…to democratize the AI research and development (R&D) landscape…with four measurable goals in mind, namely to (1) spur innovation, (2) increase diversity of talent, (3) improve capacity, and (4) advance trustworthy AI.”14
- AI Risk Management Framework.15 On January 26, 2023, the National Institute of Standards and Technology of the Department of Commerce released the AI Risk Management Framework, a voluntary, non-sector-specific resource “designed to equip organizations and individuals… with approaches that increase the trustworthiness of AI systems.”16 According to the Framework, trustworthy AI must be “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy enhanced, and fair with… harmful biases managed.”17
III. Recent AI Announcements by the FTC and Other Agencies
The FTC and other federal agencies appear to be following the same playbook as the White House, emphasizing the applicability of existing rules rather than creating new ones.
Joint Announcement by the Consumer Financial Protection Bureau, Department of Justice Civil Rights Division, Equal Employment Opportunity Commission, and the FTC
On April 25, 2023, FTC Chair Lina M. Khan and officials from three other federal agencies issued a joint statement warning that automated AI systems may contribute to unlawful discrimination and violate federal law, specifying three particular issues of concern:18
- Data Bias. AI outcomes can be skewed by unrepresentative or imbalanced datasets, or datasets that incorporate existing biases or other types of errors. AI systems can also, intentionally or not, correlate data with protected classes, which can in turn lead to discriminatory outcomes and prohibited practices.
- Lack of Transparency. Many AI systems are “black boxes” whose internal workings are not clear to users and, in some cases, even the developers of the AI tools themselves. This lack of transparency often makes it challenging for developers, businesses, and individuals to know what criteria and data the system is relying on for its outputs and whether the system is fair or biased.
- Flawed Design and Use. Developers may design AI systems based on flawed assumptions about their users, relevant context, or the underlying practices or procedures the AI systems may replace.
The agencies emphasized that “[e]xisting legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices,”19 highlighting recent agency actions and statements on the applicability of specific existing rules to AI:
- CFPB’s circular “confirming that federal consumer financial laws and adverse action requirements apply regardless of the technology being used”;20
- DOJ’s “statement of interest in federal court explaining that the Fair Housing Act applies to algorithm-based tenant screening services”;21
- EEOC’s “technical assistance document explaining how the Americans with Disabilities Act applies to the use of software, algorithms, and AI to make employment-related decisions about job applicants and employees”;22
- FTC guidance that market participants who “use automated tools that have discriminatory impacts,” “make claims about AI that are not substantiated,” or “deploy AI before taking steps to assess and mitigate risks” may be violating the FTC Act;23 and
- FTC settlements requiring companies to destroy algorithms “that were trained on data that should not have been collected.”24
FTC Chair Lina Khan’s Statements
On May 3, 2023, FTC Chair Lina Khan published an opinion piece in The New York Times,25 highlighting several AI-related risks of particular interest to the FTC:
- Using control over AI inputs to entrench dominance. Firms that control resources necessary to develop AI such as cloud services, computing power, and large datasets “could use their control over these key inputs to exclude or discriminate against downstream rivals, picking winners and losers in ways that further entrench their dominance.”26
- Algorithmic pricing. Use of AI tools to determine prices across a variety of sectors of the economy “can facilitate collusive behavior that unfairly inflates prices—as well as forms of precisely targeted price discrimination.”27 The emphasis on price discrimination comes amidst the FTC’s efforts to reinvigorate enforcement of the Robinson-Patman Act, a statute targeting price discrimination that has been largely unused by previous administrations.28
- Fraud. AI tools can “turbocharge fraud,” e.g., by “crafting seemingly authentic messages” and enabling “scammers to generate content quickly and cheaply.”29 Notably, Khan emphasized that the FTC “will look not just at the fly-by-night scammers deploying these [AI] tools but also at the upstream firms that are enabling them.”30
- AI training data may lead to discrimination and privacy harms. When datasets used to train AI models contain errors or bias, AI tools “risk automating discrimination”; and if data used to train AI contains private communications or other sensitive information, AI tools can also violate user privacy.31 Khan emphasized that “[e]xisting laws prohibiting discrimination will apply, as will existing authorities proscribing exploitative collection or use of personal data.”32
In an interview with MSNBC, Khan emphasized the use of existing tools—including laws prohibiting deception, discrimination, unfair practices, and collusion—to go after AI-related harms, noting that “AI may be new but it doesn’t enjoy some type of legal exemption or legal shield from existing rules and existing rules will apply.”33 Despite the emphasis on existing tools, Khan also hinted that aggressive and proactive enforcement may be merited: Khan lamented regulatory inaction at “the onset of the Web 2.0 era in the mid-2000s” and commended antitrust scrutiny that led to IBM unbundling software and hardware and AT&T opening up its patent vault as offering “lessons for how to handle technological disruption for the benefit of all.”34
IV. Legislative Initiatives
While the recent announcements from the executive branch emphasize the application of existing rules to AI, others have expressed concern that if the US does not enact new AI laws soon, other jurisdictions will drive the agenda on regulating this new technology. Senator Mark Warner (D-VA) expressed this view recently, saying, “If we’re not careful, we could end up ceding American policy leadership to the EU again [referencing data privacy regulation]. So it is a race.”35
Last month, Senate Majority Leader Chuck Schumer (D-NY) announced a legislative push to regulate AI.36 His framework includes four principles: (1) “identification of who trained the algorithm and who its intended audience is”; (2) “disclosure of its data source”; (3) “an explanation for how it arrives at its responses”; and (4) “transparent and strong ethical boundaries.”37 Mirroring the emphasis on public evaluation of AI models in the White House’s May 4 announcement, Schumer’s proposal would “require companies to allow independent experts to review and test AI technologies ahead of a public release or update.”38 There is little additional detail available, but Schumer says he plans to refine the framework “in conjunction with stakeholders from academia, advocacy organizations, industry, and the government.”39
Some existing legislative proposals already touch on AI. The American Data Privacy and Protection Act, which was passed out of the House Energy and Commerce Committee last summer but has not yet been reintroduced in 2023, contains a provision requiring that certain algorithms undergo design evaluations and impact assessments to address the risk of algorithmic discrimination and other potential harms.40
A number of states have passed data privacy laws that provide consumers the right to opt out of automated decision-making and profiling. For example, the California Privacy Rights Act directs the newly created California Privacy Protection Agency (CPPA) to create regulations governing “access and opt-out rights with respect to businesses’ use of automated decision-making technology.”41 The statute states that this must include “profiling” and indicates that covered entities42 responding to access requests must include “meaningful information about the logic involved in those decision-making processes” and a “description of the likely outcome of the process with respect to the consumer.”43 In February 2023, the CPPA issued an invitation for preliminary comments on regulations for the use of automated decision-making and profiling,44 and is likely to consider the use of AI for these purposes in formulating the regulations.
When a transformative technology such as AI comes along, regulators face a difficult dilemma. Imposing strict rules before the technology is fully understood can squash innovation and interfere with market forces; but waiting too long to regulate could cement undesired business models and practices that may be difficult to later undo. Recent announcements bring into focus the federal government’s current approach to managing this tension: identifying potential harms from AI, using existing laws and regulations to address these harms, relying on voluntary industry compliance with non-binding guidance, and investing in AI to ensure equitable and responsible development. At the same time, there is continued discussion of new AI legislation, so more drastic changes may be on the horizon.
For companies developing or implementing AI tools, it is imperative to make sure that the tools comply with existing laws and regulations. But it is also important to take note of the key principles that repeatedly crop up in voluntary guidance and government statements, like preventing or mitigating algorithmic discrimination, designing AI tools to respect user privacy, being transparent about when and how AI is being used, and mitigating the potential risk of AI tools being used for illegal purposes such as fraud. The fact that these core principles repeatedly come up in AI-related guidance, frameworks, and enforcer statements may reflect an emerging consensus on the pillars of AI regulation, and provides an invaluable signpost on the approach future AI legislation and regulation may take.
A cross-disciplinary team of O’Melveny attorneys is closely tracking legal and regulatory developments in AI, bringing our expertise in antitrust, consumer protection, privacy and data security, appellate law, securities law compliance, and other fields to this novel area of law. Please contact the attorneys listed on this article or your O’Melveny counsel to help you navigate AI-related legal and strategic issues.
1The White House, FACT SHEET: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety (May 4, 2023), available at https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/fact-sheet-biden-harris-administration-announces-new-actions-to-promote-responsible-ai-innovation-that-protects-americans-rights-and-safety/.
3O’Melveny & Myers LLP, Advocacy Group Files Complaint Urging FTC to Halt GPT-4 Amid Growing Pressure to Regulate Generative AI (Apr. 4, 2023), available at https://www.omm.com/resources/alerts-and-publications/alerts/advocacy-group-files-complaint-urging-ftc-to-halt-gpt-4-amid-growing-pressure-to-regulate-generative/.
4The White House, FACT SHEET: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety (May 4, 2023).
5The White House, Statement from Vice President Harris After Meeting with CEOs on Advancing Responsible Artificial Intelligence Innovation (May 4, 2023), available at https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/statement-from-vice-president-harris-after-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/.
7The White House, FACT SHEET: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety.
10Nat’l Science Foundation, Press Release, NSF Announces 7 New Artificial Intelligence Research Institutes (May 4, 2023), available at https://new.nsf.gov/news/nsf-announces-7-new-national-artificial.
11The White House, Blueprint for an AI Bill of Rights, available at https://www.whitehouse.gov/ostp/ai-bill-of-rights/.
12Nat’l Artificial Intelligence Research Resource Task Force, Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource (Jan. 24, 2023), available at https://www.ai.gov/wp-content/uploads/2023/01/NAIRR-TF-Final-Report-2023.pdf.
13Id. at ii.
14Id. at iv-v.
15Nat’l Institute of Standards and Technology, Artificial Intelligence Risk Management Framework (Jan. 2023), available at https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.
16Id. at 2.
17Id. at 2-3.
18Fed. Trade Comm’n, Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (April 25, 2023), available at https://www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf.
19Id. at 1.
20Id. at 2; Consumer Financial Protection Bureau, Consumer Financial Protection Circular 2022-03: Adverse Action Notification Requirements in Connection with Credit Decisions Based on Complex Algorithms, available at https://www.consumerfinance.gov/compliance/circulars/circular-2022-03-adverse-action-notification-requirements-in-connection-with-credit-decisions-based-on-complex-algorithms/.
21Fed. Trade Comm’n, Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (April 25, 2023); Dep’t of Justice, Justice Department Files Statement of Interest in Fair Housing Act Case Alleging Unlawful Algorithm-Based Tenant Screening Practices (Jan. 9, 2023), available at https://www.justice.gov/opa/pr/justice-department-files-statement-interest-fair-housing-act-case-alleging-unlawful-algorithm.
22Fed. Trade Comm’n, Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (April 25, 2023); U.S. Equal Employment & Opportunity Comm’n, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022), available at https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence.
23Fed. Trade Comm’n, Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (April 25, 2023) at 2–3; Fed. Trade Comm’n, Keep your AI claims in check (Feb. 27, 2023), available at https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check; Fed. Trade Comm’n, Chatbots, Deepfakes, and Voice Clones: AI Deception for Sale (Mar. 20, 2023), available at https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale.
24Fed. Trade Comm’n, Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (April 25, 2023) at 3; Fed. Trade Comm’n, FTC Finalizes Settlement with Photo App Developer Related to Misuse of Facial Recognition Technology (May 7, 2021), available at https://www.ftc.gov/news-events/news/press-releases/2021/05/ftc-finalizes-settlement-photo-app-developer-related-misuse-facial-recognition-technology; Fed. Trade Comm’n, FTC Takes Action Against Company Formerly Known as Weight Watchers for Illegally Collecting Kids’ Sensitive Health Data (Mar. 4, 2022), available at https://www.ftc.gov/news-events/news/press-releases/2022/03/ftc-takes-action-against-company-formerly-known-weight-watchers-illegally-collecting-kids-sensitive.
25Lina Khan, We Must Regulate A.I. Here’s How, N.Y. TIMES (May 3, 2023), available at https://www.nytimes.com/2023/05/03/opinion/ai-lina-khan-ftc-technology.html.
28See Fed. Trade Comm’n, “Returning to Fairness” — Prepared Remarks of Commissioner Alvaro M. Bedoya at the Midwest Forum on Fair Markets (Sep. 22, 2022), available at https://www.ftc.gov/news-events/news/speeches/returning-fairness-prepared-remarks-commissioner-alvaro-m-bedoya-midwest-forum-fair-markets; Fed. Trade Comm’n, Policy Statement of the Federal Trade Commission on Rebates and Fees in Exchange for Excluding Lower-Cost Drug Products (Jun. 16, 2022), available at https://www.ftc.gov/legal-library/browse/policy-statement-federal-trade-commission-rebates-fees-exchange-excluding-lower-cost-drug-products.
29Lina Khan, We Must Regulate A.I. Here’s How.
33Morning Joe, Interview with Lina Khan (May 4, 2023), available at https://www.youtube.com/watch?v=mZ_75IH_7Ms.
34Lina Khan, We Must Regulate A.I. Here’s How.
35Brendan Bordelon & Mohar Chatterjee, “It’s Got Everyone’s Attention”: Inside Congress’s Struggle to Rein in AI, POLITICO (May 4, 2023), available at https://www.politico.com/news/2023/05/03/congresss-scramble-build-ai-agenda-00095135.
36Schumer Launches Major Effort to Get Ahead of Artificial Intelligence, SENATE DEMOCRATS (Apr. 13, 2023), available at https://www.democrats.senate.gov/newsroom/press-releases/schumer-launches-major-effort-to-get-ahead-of-artificial-intelligence.
37Andrew Solender & Ashley Gold, Scoop: Schumer Lays Groundwork for Congress to Regulate AI, AXIOS (Apr. 13, 2023), available at https://www.axios.com/2023/04/13/congress-regulate-ai-tech?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axiosam&stream=top. (Note: Schumer’s press release does not describe the guardrails in as much detail but links to this article).
38Schumer Launches Major Effort to Get Ahead of Artificial Intelligence.
40American Data Privacy and Protection Act (H.R. 8152), available at https://www.congress.gov/bill/117th-congress/house-bill/8152/text#toc-HD58970CD67B741C891BC0E71CD547070.
41Cal. Civ. Code §1798.185(a)(16).
42The CPRA applies to for-profit entities doing business in California that meet one of the following thresholds: (a) as of January 1 of the calendar year, had annual gross revenues in excess of twenty-five million dollars ($25,000,000) in the preceding calendar year; (b) alone or in combination, annually buys, sells, or shares the personal information of 100,000 or more consumers or households; or (c) derives 50 percent or more of its annual revenues from selling or sharing consumers’ personal information. Cal. Civ. Code §1798.140(d).
43Cal. Civ. Code §1798.185(a)(16).
44Cal. Privacy Protection Agency, Invitation for Preliminary Comments on Proposed Rulemaking Cybersecurity Audits, Risk Assessments, and Automated Decisionmaking (Feb. 10, 2023), available at https://cppa.ca.gov/regulations/pdf/invitation_for_comments_pr_02-2023.pdf.
This memorandum is a summary for general information and discussion only and may be considered an advertisement for certain purposes. It is not a full analysis of the matters presented, may not be relied upon as legal advice, and does not purport to represent the views of our clients or the Firm. Sergei Zaslavsky, an O’Melveny partner licensed to practice law in the District of Columbia and Maryland, Stephen McIntyre, an O’Melveny partner licensed to practice law in California, William K. Pao, an O'Melveny partner licensed to practice law in California, Nexus U. Sea, an O'Melveny partner licensed to practice law in New York and New Jersey, Wenting Yu, an O'Melveny partner licensed to practice law in California and New York, Scott W. Pink, an O'Melveny special counsel licensed to practice law in California and Illinois, Amit Itai, an O'Melveny associate licensed to practice law in California and Israel, and Laura K. Kaufmann, an O’Melveny associate licensed to practice law in California, contributed to the content of this newsletter. The views expressed in this newsletter are the views of the authors except as otherwise noted.
© 2023 O’Melveny & Myers LLP. All Rights Reserved. Portions of this communication may contain attorney advertising. Prior results do not guarantee a similar outcome. Please direct all inquiries regarding New York’s Rules of Professional Conduct to O’Melveny & Myers LLP, Times Square Tower, 7 Times Square, New York, NY, 10036, T: +1 212 326 2000.