False Claims Act Enforcement Risks for Companies Using AI
March 13, 2026
Introduction
As artificial intelligence (AI) is adopted across industries, companies are tracking the evolving patchwork of AI-specific regulations and guidance. But compliance, risk, and legal teams should also consider generally applicable enforcement laws that govern all company conduct—including AI use. The False Claims Act (FCA) will undoubtedly be a key component of federal enforcement efforts against users and developers of AI. The FCA is a quasi-penal statute that punishes the knowing submission of false claims for payment to the federal government, and it applies not only to companies that do business directly with the government, but also to subcontractors and others who conduct business in connection with government programs.
Companies that do business with the government—or sell products to those that do—should expect increased FCA enforcement focus on AI. The paucity of AI-specific federal regulations is likely to fuel this trend. DOJ and relators (private whistleblowers empowered and financially incentivized to bring FCA claims on behalf of the United States) have significant flexibility to assert their own theories on what constitutes appropriate AI use and what crosses the line into fraud.
Key Takeaways
- Minimal Federal Regulation Creates a Gap for Increased Enforcement: The federal government has provided limited guidance on compliance requirements for AI. For example, although the National Institute of Standards and Technology (NIST) has issued governance and risk management principles related to AI, that guidance is high-level and does not specify what compliance should look like for particular companies. This lack of clarity creates space for FCA plaintiffs to argue that a company's AI use was inappropriate and led to the submission of false claims.
- FCA Plaintiffs Are Likely to Follow Existing Playbooks with an AI Spin: We can expect that DOJ and relators will follow existing FCA frameworks based on longstanding, non-AI regulations—submission of false payment data to federal health programs, failure to deliver products that meet the contracted specifications, violation of cybersecurity requirements, and the like—and apply them to the use and sale of AI products.
- Healthcare Tools May Be an Early Enforcement Target: AI systems used in electronic medical records or for coding could trigger FCA enforcement risk if they influence diagnoses or billing submissions to federal healthcare programs.
- Cybersecurity Failures Can Lead to FCA Exposure: Contractors that process government data through AI systems without appropriate security controls may face FCA exposure for falsely certifying compliance with cybersecurity requirements.
- Consistent AI Governance and Oversight Can Help Mitigate Potential Exposure: Organizations should implement robust AI governance and monitoring processes to ensure that AI is implemented consistent with available guidance and with an eye toward potential FCA exposure.
The AI Regulatory Gap & FCA Litigation
Although AI is increasingly ubiquitous, the regulatory landscape in the United States is notably fractured. There is no comprehensive federal AI regulatory scheme. President Trump has issued four executive orders focused on AI,1 which generally emphasize the importance of deregulation at both the state and federal levels to foster innovation.
Federal agencies have echoed this emphasis on deregulation. For example, the Chair of the Federal Trade Commission, Andrew Ferguson, stressed the importance of “making sure that government regulators don’t intervene too early.”2 The Centers for Medicare & Medicaid Services (CMS) has acknowledged the use of AI by players in the healthcare industry but has not issued comprehensive guidance, instead noting that AI should comply with preexisting requirements.3
But the lack of AI-specific regulation is unlikely to inhibit federal enforcement actions against the developers and users of AI under existing legal frameworks.4 Because the FCA is one of the federal government’s most potent enforcement mechanisms, it is likely to become one of the primary tools for federal enforcement relating to the use of AI—particularly given that the administration has already signaled a willingness to use the FCA as a policy tool. This trend will only be compounded by the private relators’ bar and its financially motivated efforts to identify new cases. And as AI-specific regulations and requirements for federal contractors materialize, the lack of clear compliance precedents and guidance will lead to additional creative FCA theories.
Areas of Potential FCA Enforcement Action
With limited AI-specific regulations at the federal level, DOJ and relators are likely to rely on existing FCA playbooks to bring cases involving the use of AI. For example, a company that certifies compliance with contractual or regulatory requirements as a condition of payment could face FCA exposure if its AI systems fail to meet those standards. Similarly, claims for payment generated or influenced by AI tools that produce inaccurate data could be characterized as false under traditional FCA frameworks. This risk arises when an AI system is trained on inaccurate or erroneous data, and also when a system learns from historical human inputs that could be alleged to be flawed in some way. The absence of clear federal guidance outlining appropriate compliance measures does not insulate companies from liability; instead, it creates uncertainty that DOJ and relators can exploit by arguing that AI use was inconsistent with existing legal obligations. Companies should anticipate that familiar FCA theories will be adapted to target AI use cases across industries, and should proactively assess how their AI systems interact with government contracts, certifications, and payment submissions. The examples below illustrate how these enforcement theories may play out in different contexts:
Healthcare: Electronic Medical Records
Given the sums of money at stake, healthcare has been, and will undoubtedly continue to be, a prime focus of FCA litigation. Health records are the vehicle for many new AI technologies that assist with documentation. Because information from health records (diagnosis codes, procedure codes, etc.) is submitted to the federal government to determine payments, these new AI health record technologies will likely be a focus of FCA enforcement in healthcare. Just last year, the newly formed DOJ-HHS False Claims Act Working Group stated that one of its six priority enforcement areas was “[m]anipulation of Electronic Health Records systems to drive inappropriate utilization of Medicare covered products and services.”5 HHS-OIG has similarly stated that “querying physicians via electronic medical record platforms (including prompts generated by artificial intelligence algorithms)” is “potentially abusive and fraudulent conduct.”6
Federal enforcers have already advanced the argument that AI-generated “prompts” and “queries” effectively usurp physician judgment, leading to the submission of unsupported diagnoses.7 As providers and health plans increasingly rely on AI tools to identify diagnoses and supplement medical record documentation, these systems are directly influencing claims submitted to the government. Companies should expect DOJ and relators to argue that AI-assisted coding resulted in false submissions—including arguments that the AI output was accepted without meaningful physician review.
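To illustrate the kind of documentation that can help rebut a “no meaningful physician review” theory, the following is a minimal, hypothetical sketch in Python of an audit trail that holds AI-suggested diagnosis codes back from billing until a physician’s disposition is recorded. The class and field names are our own illustrative assumptions; they do not reflect any vendor’s actual system or any regulatory requirement.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CodeSuggestion:
    """One AI-suggested diagnosis code awaiting physician review (hypothetical schema)."""
    patient_id: str
    icd10_code: str
    model_version: str
    suggested_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed_by: str | None = None  # physician identifier, recorded at review
    accepted: bool | None = None    # physician's decision, recorded at review

class ReviewLedger:
    """Append-only record tying each AI suggestion to a documented physician disposition."""
    def __init__(self) -> None:
        self._entries: list[CodeSuggestion] = []

    def record_suggestion(self, suggestion: CodeSuggestion) -> None:
        self._entries.append(suggestion)

    def record_review(self, suggestion: CodeSuggestion, physician_id: str, accepted: bool) -> None:
        suggestion.reviewed_by = physician_id
        suggestion.accepted = accepted

    def codes_cleared_for_billing(self) -> list[CodeSuggestion]:
        # Only suggestions with a documented, affirmative physician review
        # reach the claim; unreviewed AI output is held back by design.
        return [s for s in self._entries if s.reviewed_by is not None and s.accepted]
```

The design point is that the record of review exists before the code flows downstream, so the organization can later show when, by whom, and with what outcome each AI suggestion was evaluated.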
Government Procurement & Third Party Liability: AI Developers & Vendors
The federal government is a significant purchaser of AI software and products. Federal agencies across the executive branch are increasingly procuring AI-powered tools for a wide range of functions, including data analysis, fraud detection, benefits administration, national security, and administrative decision-making. Federal spending on AI is approaching $10 billion, and that number is likely only to increase.8 While the majority of these purchases are governed by the Federal Acquisition Regulation (FAR) and agency-specific procurement rules, existing frameworks have not provided uniform guidance on AI. This lack of guidance, combined with the rapid growth and evolving nature of AI tools used and supplied by federal contractors, creates compliance uncertainties that could exacerbate FCA exposure.
Companies that develop and sell AI products and software to the federal government, as well as those that deploy AI in executing government contracts, face FCA risk if those tools do not perform as represented. Representations about an AI system’s capabilities, accuracy, or limitations—whether made during the procurement process or in ongoing certifications—could form the basis of an FCA claim. The federal government has already brought non-FCA enforcement actions against companies that made unsubstantiated or overstated claims about their AI products’ capabilities,9 and plaintiffs’ counsel are already testing novel class action theories challenging alleged misuse of AI. While those actions did not proceed under the FCA, they signal the potential for increased scrutiny of AI-related representations that could readily translate to FCA theories.
AI developers who do not sell their products to the government are not immune from these trends. FCA exposure extends beyond companies that contract directly with the government. Because the FCA imposes liability on third parties who “cause” false claims to be submitted, AI developers and vendors that sell products to government contractors and know their products are subject to government requirements can face exposure if use of those products results in allegedly false claims—even without a direct contractual relationship with the government.
Cybersecurity: Government Data on Unsecured AI Platforms
Companies that use AI systems to process, store, or transmit government data face significant FCA exposure if those systems are not adequately secured. DOJ has increasingly pursued cybersecurity-related FCA cases under the Civil Cyber-Fraud Initiative, which targets government contractors and grant recipients that knowingly fail to comply with cybersecurity requirements. AI systems present unique cybersecurity vulnerabilities: they often require access to large datasets, may transmit data to third-party cloud environments for processing, and can be exploited through adversarial attacks that manipulate model inputs or outputs. If a government contractor deploys an AI tool without implementing appropriate security controls, the contractor could face FCA exposure for falsely certifying compliance with applicable cybersecurity standards—whether or not the security issues actually resulted in a data breach.
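As a concrete illustration of one such control, here is a minimal Python sketch of an egress gate that refuses to send covered data to AI endpoints a contractor has not vetted against its security obligations. The hostnames and function name are hypothetical placeholders of our own, not a statement of what any regulation or contract requires.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of AI-processing endpoints the contractor has
# vetted against its contractual security requirements (for example,
# FedRAMP-authorized environments). These hostnames are placeholders.
APPROVED_AI_HOSTS = {
    "ai.internal.example-contractor.gov",
    "inference.fedramp-authorized.example.com",
}

def check_ai_destination(endpoint_url: str) -> None:
    """Refuse to transmit covered government data to an unvetted AI endpoint."""
    host = urlparse(endpoint_url).hostname
    if host not in APPROVED_AI_HOSTS:
        raise PermissionError(
            f"AI endpoint {host!r} is not on the approved list; "
            "covered government data must not be transmitted here."
        )

# Example: this call would raise before any data leaves the environment.
# check_ai_destination("https://api.random-ai-vendor.example/v1/chat")
```

A technical gate of this kind does not itself establish compliance, but it generates contemporaneous evidence that data flows were constrained to vetted environments.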
Best Practices to Avoid AI-Related FCA Exposure
Government contractors that use or sell AI products or software, or companies that provide AI tools to government contractors, should consult with counsel to develop comprehensive AI governance procedures that consider the FCA landscape. Given the likelihood of AI-related enforcement, companies should take an active role in developing robust and consistent AI governance programs that emphasize pre-deployment and ongoing testing, auditing, and validation of AI outputs. These governance programs should be designed with the risks of FCA exposure in mind. And all companies should carefully consider how their products compare to the representations made about them.
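To make “ongoing testing, auditing, and validation” concrete, the sketch below is a hypothetical Python example of our own construction: it samples recent AI outputs for human re-review and escalates when the upheld fraction drops below a governance floor. The “human_agrees” field and the 95% floor are illustrative assumptions, not a regulatory standard.

```python
import random

def draw_audit_sample(outputs: list[dict], rate: float = 0.05) -> list[dict]:
    """Randomly select a slice of recent AI outputs for human re-review."""
    k = max(1, int(len(outputs) * rate))
    return random.sample(outputs, k)

def check_sampled_accuracy(reviewed: list[dict], floor: float = 0.95) -> None:
    """After reviewers mark each sampled item with a 'human_agrees' boolean,
    escalate if the upheld fraction falls below the governance floor."""
    upheld = sum(1 for item in reviewed if item["human_agrees"])
    accuracy = upheld / len(reviewed)
    if accuracy < floor:
        raise RuntimeError(
            f"Sampled AI accuracy {accuracy:.1%} is below the {floor:.0%} floor; "
            "pause the automated workflow and escalate to compliance."
        )
```

Routine sampling of this kind, documented over time, is one way to demonstrate that AI outputs were monitored rather than accepted uncritically.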
Increased AI adoption, aggressive DOJ enforcement priorities, and the FCA’s per-claim penalty structure create a risk environment that companies cannot afford to ignore. DOJ has already demonstrated its eagerness to repurpose existing FCA theories for new technologies, and relators are likely to follow. As companies continue to evaluate and adopt AI systems, they should assess their AI-related FCA exposure, implement appropriate governance and oversight mechanisms, and ensure that their compliance programs are equipped to evaluate new technologies. Our team includes experienced FCA litigators who have already defended against developing versions of these theories, as well as subject-matter experts in AI legal standards, including emerging regulatory requirements governing AI use.
1 Ensuring a National Policy Framework for AI, Exec. Order No. 14365 (Dec. 11, 2025); Preventing Woke AI in the Fed. Gov’t., Exec. Order No. 14319 (July 23, 2025); Accelerating Fed. Permitting of Data Center Infrastructure, Exec. Order No. 14318 (July 23, 2025); Promoting the Export of the American AI Technology Stack, Exec. Order No. 14320 (July 23, 2025).
2 FTC Commissioner Outlines FTC Priorities, The Legal Wire (May 15, 2025), https://thelegalwire.ai/ftc-commissioner-outlines-ftc-priorities/.
3 Centers for Medicare & Medicaid Services, Frequently Asked Questions: Coverage Criteria and Utilization Management Requirements in CMS Final Rule (CMS-4201-F) (Feb. 6, 2024), https://www.ahcancal.org/Reimbursement/Medicare/Documents/CMS%20Memo%20FAQ%20on%202024%20MA%20Final%20Rule%202.6.24.pdf.
4 Press Release, Federal Trade Commission, FTC Order Requires Online Marketer to Pay $1 Million for Deceptive Claims that Its AI Product Could Make Websites Compliant with Accessibility Guidelines (Jan. 3, 2025), https://www.ftc.gov/news-events/news/press-releases/2025/01/ftc-order-requires-online-marketer-pay-1-million-deceptive-claims-its-ai-product-could-make-websites; Press Release No. 2024-36, U.S. Securities and Exchange Commission, SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence (Mar. 18, 2024), https://www.sec.gov/newsroom/press-releases/2024-36.
5 Press Release, U.S. Department of Health and Human Services, DOJ-HHS False Claims Act Working Group (July 2, 2025), https://www.hhs.gov/press-room/hhs-doj-false-claims-act-working-group.html.
6 Office of Inspector General, U.S. Dep’t of Health & Hum. Servs., Medicare Advantage Industry Segment-Specific Compliance Program Guidance 20 (Feb. 3, 2026), https://oig.hhs.gov/documents/compliance/11464/ma-icpg.pdf.
7 United States ex rel. Ormsby v. Sutter Health, 444 F. Supp. 3d 1010, 1030 (N.D. Cal. 2020) (“Using data mining, Sutter and PAMF ‘pushed’ their physicians . . . to find and refresh especially high-paying risk-adjusting diagnosis codes to increase patients’ risk scores. Similarly, PAMF physicians received ‘queries’ in the electronic medical record from coders reminding the physicians to ensure that all such diagnosis codes were captured.”).
8 Artificial Intelligence Market Profile, Bloomberg Government, https://about.bgov.com/insights/government-contracting/ai-market-profile/.
9 Press Release, Federal Trade Commission, FTC Sues to Stop Air AI from Using Deceptive Claims About Business Growth, Earnings Potential, and Refund Guarantees (Aug. 25, 2025), https://www.ftc.gov/news-events/news/press-releases/2025/08/ftc-sues-stop-air-ai-using-deceptive-claims-about-business-growth-earnings-potential-refund.
This memorandum is a summary for general information and discussion only and may be considered an advertisement for certain purposes. It is not a full analysis of the matters presented, may not be relied upon as legal advice, and does not purport to represent the views of our clients or the Firm. Amanda M. Santella, an O’Melveny partner licensed to practice law in the District of Columbia and Maryland; Elizabeth M. Bock, an O’Melveny partner licensed to practice law in California; Anton Metlitsky, an O’Melveny counsel licensed to practice law in New York and the District of Columbia; Reema Shah, an O’Melveny partner licensed to practice law in New York; Elizabeth Arias, an O’Melveny partner licensed to practice law in California; Gillian Mak, an O’Melveny associate licensed to practice law in the District of Columbia; and Brandon Hernandez, an O’Melveny associate licensed to practice law in Maryland, contributed to the content of this newsletter. The views expressed in this newsletter are the views of the authors except as otherwise noted.
© 2026 O’Melveny & Myers LLP. All Rights Reserved. Portions of this communication may contain attorney advertising. Prior results do not guarantee a similar outcome. Please direct all inquiries regarding New York’s Rules of Professional Conduct to O’Melveny & Myers LLP, 1301 Avenue of the Americas, Suite 1700, New York, NY, 10019, T: +1 212 326 2000.