Advocacy Group Files Complaint Urging FTC to Halt GPT-4 Amid Growing Pressure to Regulate Generative AI
April 4, 2023
On March 30, 2023, the Center for Artificial Intelligence and Digital Policy (“CAIDP”) filed a public complaint urging the Federal Trade Commission to halt GPT-4 commercial deployment and investigate OpenAI for alleged violations of Section 5 of the FTC Act, FTC’s AI guidance, and emerging AI governance norms.
GPT-4 is the latest and most powerful version of OpenAI’s language model. GPT stands for Generative Pre-trained Transformer—a language model that uses deep learning to generate conversational text in response to prompts. OpenAI, a startup backed by Microsoft, has released a chatbot (ChatGPT) powered by an earlier version of its GPT model, and Microsoft has incorporated a GPT-powered chatbot into its Bing search engine. Other companies, both inside and outside the United States, are also racing to incorporate AI language models into their products. For example, Google unveiled its Generative AI chatbot Bard on February 6 and released it on March 21, and Chinese search company Baidu released its AI chatbot, ERNIE Bot, on March 16.
The rapid development of Generative AI technology has sparked a vigorous public debate about the technology’s immense promise and potential risks, as well as about whether and how to regulate this emerging technology. The path forward is far from clear: while traditional competition policy generally favors rapid development of innovative technology, some voices are calling for a slowdown or even a pause in Generative AI development to allow more time to address risks and develop guardrails. Regardless of the outcome of this policy debate, counsel whose companies develop Generative AI, are considering implementing Generative AI tools, or are users of Generative AI will need to keep a close eye on the rapidly evolving legal and regulatory landscape.
I. CAIDP’s FTC Complaint
CAIDP’s Complaint highlights many of the potential legal and regulatory issues raised by critics of Generative AI’s rapid development and deployment. The Complaint alleges that GPT-4 is “biased, deceptive,” “a risk to privacy and public safety,” and fails to satisfy FTC guidance that AI’s use should be “transparent, explainable, fair, and empirically sound while fostering accountability.” The Complaint points to the following issues allegedly implicated by GPT-4’s deployment:
- Bias. According to the Complaint, OpenAI released GPT-4 to the public despite knowing the product’s risk for bias. It points to OpenAI’s own statements as evidence: “We found that the model has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups.” According to the Complaint, this bias originates, in part, from the large, uncurated data sets that the models are trained on that themselves include biased views.
- Privacy. The Complaint raises several privacy concerns with OpenAI’s Generative AI and GPT-4 in particular. First, the Complaint alleges that “[o]ther than posting a street address to receive complaints by paper mail, OpenAI appears to be entirely unaware of the GDPR”—an EU data protection and privacy law that sets forth obligations for data controllers and rights for individuals whose personal data is processed (data subjects). The Complaint mentions several specific rights data subjects have with regard to their personal data under the GDPR that are potentially implicated by GPT-4: “access to information about the rights of data subjects, including the right to access, right to re[c]tification, right to erasure (also known as ‘the right to be forgotten’), right to object, and purpose limitation.” Second, the Complaint points to alleged OpenAI “[p]rivacy snafu[s]”—for example, exposing users’ private chat histories to other users. Finally, the Complaint argues that GPT-4’s expected ability to provide text responses from photo inputs “has staggering implications for personal privacy and personal autonomy, as it would give the user of GPT-4 the ability not only to link an image of a person to detailed personal data, available in the model, but also for OpenAI’s product GPT-4 to make recommendations and assessments, in a conversational manner, regarding the person.”
- Transparency. The Complaint claims that unlike previous releases and in contravention of the general norm, “OpenAI has not disclosed details about [GPT-4’s] architecture, model size, hardware, computing resources, training techniques, dataset construction, or training methods.” Without this information, the Complaint alleges, researchers are unable to adequately test the model or predict what harms could emerge from its use. Although the Complaint faults OpenAI for its lack of transparency, it is noteworthy that the Complaint extensively quotes from two reports provided by OpenAI, the GPT-4 Technical Report and GPT-4 System Card, that detail the development and safety concerns of GPT-4.
- Cybersecurity. The Complaint alleges that OpenAI has not adequately addressed cybersecurity concerns with Generative AI, citing OpenAI’s own assessment that GPT-4 “does continue the trend of potentially lowering the cost of certain steps of a successful cyberattack, such as through social engineering.” Additionally, the Complaint alleges that “[t]hrough GPT-4, OpenAI gathers internal corporate trade secrets.” It describes an incident at Amazon where ChatGPT generated text that “closely” resembled internal company data that was previously inputted by employees.
- Public Safety. The Complaint warns that the race to develop AI could “increase the risk of a catastrophic event.” OpenAI has stated that “[o]ne concern of particular importance to OpenAI is the risk of racing dynamics leading to decline in safety standards, the diffusion of bad norms, and accelerated AI timelines, each of which heighten societal risks associated with AI.” The Complaint also cites the GPT-4 System Card for the warning that powerful Generative AI models can exhibit novel capabilities like the ability to seek power post-release that were not previously identified in development.
- Consumer Protection. The Complaint alleges that Generative AI raises important consumer protection concerns. For example, as Generative AI develops to sound more “human,” consumers could be deceived into buying items that they otherwise would not buy.
- Children’s Safety. The Complaint alleges that Generative AI can compromise children’s safety (e.g., by instructing a child on how to cover up a bruise or how to lie to a parent) and that OpenAI has not provided any details of safety checks or measures implemented to protect children using its products.
- Deception. The Complaint encourages the FTC to treat GPT-4 misinformation and fabrications under the rubric of deception. According to the Complaint, GPT-4’s deception includes that it “maintains a tendency to make up facts, to double-down on incorrect information, and to perform tasks incorrectly” and “it often exhibits these tendencies in ways that are more convincing and believable than earlier GPT models.”
In light of these risks, OpenAI’s Usage Policy prohibits the use of its models for certain activities and requires that consumer-facing uses of its models in certain industries or for news generation and summarization “provide a disclaimer to users informing them that AI is being used and of its potential limitations.” The Complaint describes such language as an unconscionable attempt “to disclaim, by means of a Usage Policy, unlawful, deceptive, unfair, and dangerous applications of its product that would be self-evident to many users.”
The Complaint asks the FTC:
(a) to open an investigation and find that GPT-4’s commercial release violates Section 5 of the FTC Act (which prohibits unfair and deceptive practices), FTC’s AI guidance, and emerging AI governance norms;
(b) to “[h]alt further commercial deployment of GPT by Open AI”; and
(c) to require an independent assessment of GPT, establish an incident reporting mechanism for GPT, require compliance with FTC guidance, and “[i]nitiate a rulemaking to establish baseline standards for products in the Generative AI market sector.”
It is not clear that FTC has the authority to grant all of the requested relief, even if it were inclined to do so, and the relief it is capable of granting would likely take considerable time, meaning the Complaint is unlikely to succeed in its goal of pausing GPT deployment in the immediate future. FTC can take the following actions in response to the Complaint:
- Investigation. FTC can open a law enforcement investigation under Section 20 of the FTC Act and issue civil investigative demands to investigate allegations of unfair or deceptive practices by OpenAI. Even if FTC does not agree that the Complaint plausibly alleges violations of Section 5 of the FTC Act, it could still open a more general investigation into Generative AI under Section 6(b) of the FTC Act, which enables it to require companies to file special reports so that FTC can “conduct wide-ranging studies that do not have a specific law enforcement purpose.”
- Injunction. FTC can seek an injunction (either in federal court under FTC Act Section 13(b) or as a cease-and-desist remedy in an administrative proceeding subject to appellate judicial review under FTC Act Section 5(b)) if it has reason to believe that Section 5 of the FTC Act is being violated, but it would likely need an investigation to determine if it has reason to believe the law is being violated. If FTC obtains an order stopping a violation, the order may go beyond just enjoining the violation and include other “fencing-in” requirements to prevent the defendant entity from getting around the order.
- Rulemaking. FTC can initiate rulemaking under FTC Act Section 18 to define what practices it considers to be unfair or deceptive under Section 5. The Commission may initiate a Section 18 rulemaking when it has reason to believe that the practices to be addressed by the rulemaking are “prevalent.” Any rulemaking would likely take a significant amount of time to complete given the multitude of viewpoints on Generative AI and the fact that FTC is required to “allow interested persons to submit written data, views, and arguments” and “provide an opportunity for an informal hearing” under 15 U.S.C. § 57a(b)(1).
II. The Wide-Ranging Conversation Regarding Generative AI Risks and Regulation
The CAIDP Complaint did not occur in a vacuum; it is part of a wide-ranging dialogue about the appropriate government response to the immense promise and risk of Generative AI technology.
Many of the issues raised by the CAIDP Complaint are being raised by other public interest groups, industry participants, and governments. For example, on March 31, the Italian Data Protection Authority ordered OpenAI to stop collecting Italian users’ data, citing the alleged lack of legal justification for collection and use of personal data for training OpenAI’s models and the lack of age verification to ensure children are not exposed to inappropriate content. It has been reported that French and Irish regulators have reached out to their Italian counterparts to learn more about the basis for the ban, and Germany could follow Italy in banning ChatGPT due to data security concerns.
The CAIDP Complaint is also not the only public plea to slow down the development and deployment of Generative AI. The Future of Life Institute recently published an open letter urging the industry to pause development of AI systems more powerful than GPT-4 for at least six months due to the “profound risks to society and humanity.” The letter urges that “AI labs and independent experts . . . use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” and suggests that AI developers work with policymakers “to dramatically accelerate the development of robust governance systems,” including “regulatory authorities dedicated to AI” and “oversight and tracking” of AI systems. As of April 3, the letter has garnered more than 3,000 signatures, including from Elon Musk, Steve Wozniak (co-founder of Apple), and Andrew Yang.
Calls for regulation are coming from traditionally pro-business organizations as well, with the Chamber of Commerce assembling an AI Commission that recently released a report calling for “initiatives to develop thoughtful laws and rules for the development of responsible AI and its ethical deployment,” and cautioning that “[a] failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.”
Other jurisdictions may be ahead of the US in regulating Generative AI. In April 2021, the European Commission proposed the Artificial Intelligence Act—“the first law on AI by a major regulator anywhere.” The proposed law takes a risk-based approach to regulation, classifying AI applications based on differing levels of risk and then assigning requirements and obligations depending on the risk level. The Act is expected to be passed this year, but lawmakers are still debating how to regulate Generative AI under the Act. Additionally, the United Kingdom’s Department for Science, Innovation and Technology recently released a white paper that set forth five principles to guide the use of AI in the United Kingdom: safety, transparency, fairness, accountability, and contestability.
FTC has been studying the AI industry from a consumer protection lens for several years, focusing largely on the potential for algorithms to promote deception and racial bias. In 2021, FTC issued guidance warning businesses to avoid AI tools that produce “troubling outcomes – including discrimination by race or other legally protected classes.” In 2022, FTC issued a report to Congress warning about the use of AI by Big Tech companies and the potential for “inaccurate, biased, and discriminatory” outcomes. In 2023, FTC issued business guidance advising AI developers to avoid making false or deceptive advertising claims that exaggerate or “overpromise what [the] algorithm or AI-based tool can deliver.”
FTC and DOJ are also studying the explosion of Generative AI from a competition policy perspective. In March 2023, FTC issued a Solicitation for Public Comments on the Business Practices of Cloud Computing Providers. Cloud computing is an important tool for developing AI models. FTC is seeking to better understand issues related to “market power, business practices affecting competition, and potential security risks.” The FTC inquiry includes questions on the “types of products or services . . . cloud providers offer based on, dependent on, or related to artificial intelligence (AI)” and “[the extent to which] AI products or services [are] dependent on the cloud provider they are built on.”
FTC and DOJ leaders’ recent statements also highlight their concerns about competition in the AI sector. In response to a ChatGPT-related question at the recent Annual Spring Enforcers Summit, FTC Chairwoman Lina Khan commented that “as you have machine learning that depends on huge amounts of data and also depends on huge amounts of storage, we need to be very vigilant to make sure this is not just another site for the big companies becoming bigger and really squelching their rivals.” Khan observed that, “this is another [technological] transition that we’re looking at closely to make sure that if this is an opportunity for competition to really enter the market and disrupt, that we’re allowing that to happen rather than illegal tactics locking up the market.” Jonathan Kanter, head of the DOJ Antitrust Division, also expressed concern about AI competition at the Enforcers Summit, noting that “AI . . . is inherently dependent on scale” and “markets that are inherently dependent on scale often present a greater risk of tipping . . . and barrier to entry.”
Of course, it remains difficult to predict the direction the US will take in regulating Generative AI, particularly in light of the many competing policy concerns and considerations. For instance, the CAIDP and some other voices in the AI world are calling for slowing down the development and deployment of AI to provide more time to develop appropriate safeguards and guardrails. Competition policy typically has the opposite imperative: ensuring open competition to maximize output (up to economically efficient levels) and accelerate innovation as rival firms vie for leadership. The CAIDP Complaint makes this potential tension clear when it quotes OpenAI chief scientist Ilya Sutskever’s position that “[i]t would be highly desirable to end up in a world where companies come up with some kind of process that allows for slower releases of models with these completely unprecedented capabilities”—arguably an agreement to reduce output that would be anathema from a competition perspective but may potentially be justified by larger societal concerns if implemented by regulators rather than private actors.
Geopolitical competition is another relevant consideration: as the Chamber of Commerce AI Commission report notes, “[the] United States faces stiff competition from China in AI development” and “it is unclear which nation will emerge as the global leader, raising significant security concerns for the United States and its allies.” Pausing AI development may increase the risk that the US will lose its leadership position to geopolitical rivals, and the same concerns may apply to policies that hamstring large US technology companies and potentially give an advantage to Chinese counterparts. Conflicting policy concerns create a difficult road for regulators, and a challenge both for advocates of regulation and the companies potentially affected by regulation.
A cross-disciplinary team of O’Melveny attorneys is closely tracking the legal and regulatory developments in Generative AI, bringing their expertise in antitrust, consumer protection, privacy and data security, appellate law, and other fields to this novel area of law. Please contact the attorneys listed on this article or your O’Melveny counsel to help you navigate the novel legal and strategic issues raised by Generative AI.
Compl., In the Matter of OpenAI ¶¶ 167–68 (Mar. 30, 2023), available at https://cdn.arstechnica.net/wp-content/uploads/2023/03/CAIDP-FTC-Complaint-OpenAI-GPT-033023.pdf.
Compl. ¶ 1.
Compl. ¶¶ 2, 154 quoting Andrew Smith, Director, FTC Bureau of Consumer Protection, Using Artificial Intelligence and Algorithms, FED. TRADE COMM’N (Apr. 8, 2020), available at https://www.ftc.gov/business-guidance/blog/2020/04/using-artificial-intelligence-and-algorithms.
Compl. ¶¶ 44, 46.
Compl. ¶ 44, quoting GPT-4 SYSTEM CARD, OPEN AI 7 (Mar. 23, 2023), available at https://cdn.openai.com/papers/gpt-4-system-card.pdf (“GPT-4 System Card”).
See Compl. ¶ 43.
Compl. ¶ 98.
Compl. ¶ 92.
Compl. ¶ 99.
Compl. ¶ 102.
Compl. ¶ 108.
Compl. ¶ 109.
Compl. ¶ 68, quoting GPT-4 System Card at 3.
Compl. ¶ 65.
Compl. ¶ 116.
Compl. ¶ 114, quoting GPT-4 System Card at 19.
Compl. ¶ 113.
See generally Compl. ¶¶ 53–59.
Compl. ¶ 57.
Compl. ¶¶ 50, 52.
Compl. ¶ 71.
Compl. ¶ 85, quoting GPT-4 System Card at 19.
Usage Policies, OPENAI (Mar. 23, 2023), available at https://openai.com/policies/usage-policies.
Compl. ¶¶ 145, 146.
A Brief Overview of the Federal Trade Commission’s Investigative, Law Enforcement, and Rulemaking Authority, FED. TRADE COMM’N (May 2021), available at https://www.ftc.gov/about-ftc/mission/enforcement-authority.
15 U.S.C. § 57a(b)(3).
Press Release, Artificial Intelligence: Stop To ChatGPT By The Italian SA. Personal Data Is Collected Unlawfully, No Age Verification System Is In Place For Children, GUARANTOR FOR THE PROTECTION OF PERSONAL DATA (Mar. 31, 2023), available at https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9870847#english.
Supantha Mukherjee, Elvira Pollina & Rachel More, Italy’s ChatGPT Ban Attracts EU Privacy Regulators, REUTERS (Apr. 3, 2023), available at https://www.reuters.com/technology/germany-principle-could-block-chat-gpt-if-needed-data-protection-chief-2023-04-03/.
Open Letter, Pause Giant AI Experiments: An Open Letter, FUTURE OF LIFE INSTITUTE (Mar. 29, 2023), available at https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
COMMISSION ON ARTIFICIAL INTELLIGENCE, COMPETITIVENESS, INCLUSION, AND INNOVATION, U.S. CHAMBER OF COMMERCE 10 (Mar. 9, 2023) (“AI Commission Report”), available at https://www.uschamber.com/assets/documents/CTEC_AICommission2023_Report_v5.pdf.
The Artificial Intelligence Act, FUTURE OF LIFE INSTITUTE, available at https://artificialintelligenceact.eu/.
Gian Volpicelli, ChatGPT broke the EU plan to regulate AI, POLITICO (Mar. 3, 2023), available at https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/.
Press Release, UK unveils world leading approach to innovation in first artificial intelligence white paper to turbocharge growth, GOV.UK (Mar. 29, 2023), available at https://www.gov.uk/government/news/uk-unveils-world-leading-approach-to-innovation-in-first-artificial-intelligence-white-paper-to-turbocharge-growth.
Business Blog, Aiming for Truth, Fairness, and Equity in Your Company’s Use of A.I., FED. TRADE COMM’N (Apr. 19, 2021), available at https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.
Report to Congress, Combatting Online Harms Through Innovation, FED. TRADE COMM’N (Jun. 16, 2022), available at https://www.ftc.gov/reports/combatting-online-harms-through-innovation.
Business Blog, Keep Your AI Claims in Check, FED. TRADE COMM’N (Feb. 27, 2023), available at https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check.
Solicitation for Public Comments on the Business Practices of Cloud Computing Providers (FTC-2023-0028-0001), REGULATIONS.GOV (Mar. 22, 2023), available at https://www.regulations.gov/docket/FTC-2023-0028.
2023 Annual Antitrust Enforcers Summit: Welcome And Interviews Of AAG Kanter And FTC Chair Khan, DEP’T OF JUSTICE (Mar. 27, 2023), available at https://www.justice.gov/opa/video/2023-annual-antitrust-enforcers-summit-welcome-and-interviews-aag-kanter-and-ftc-chair.
Compl. ¶ 130, quoting Will Douglas Heaven, GPT-4 is Bigger and Better than ChatGPT – but OpenAI Won’t Say Why, MIT TECHNOLOGY REVIEW (March 14, 2023), available at https://www.technologyreview.com/2023/03/14/1069823/gpt-4-is-bigger-and-better-chatgpt-openai/.
AI Commission Report at 9.
This memorandum is a summary for general information and discussion only and may be considered an advertisement for certain purposes. It is not a full analysis of the matters presented, may not be relied upon as legal advice, and does not purport to represent the views of our clients or the Firm. Sergei Zaslavsky, an O'Melveny partner licensed to practice law in the District of Columbia and Maryland, Michael R. Dreeben, an O'Melveny partner licensed to practice law in the District of Columbia, Stephen McIntyre, an O'Melveny partner licensed to practice law in California, Scott W. Pink, an O'Melveny special counsel licensed to practice law in California and Illinois, Sheya I. Jabouin, an O'Melveny associate licensed to practice law in New York, and Emily Murphy, an O'Melveny associate licensed to practice law in the District of Columbia, contributed to the content of this newsletter. The views expressed in this newsletter are the views of the authors except as otherwise noted.
© 2023 O’Melveny & Myers LLP. All Rights Reserved. Portions of this communication may contain attorney advertising. Prior results do not guarantee a similar outcome. Please direct all inquiries regarding New York’s Rules of Professional Conduct to O’Melveny & Myers LLP, Times Square Tower, 7 Times Square, New York, NY, 10036, T: +1 212 326 2000.