Algorithmic Bias? Proposed California Legislation Targets Use of Algorithms by Financial Service Businesses and Digital and Software-Based Companies
December 23, 2020
Find more alerts and insights for emerging and tech companies at omm.com/momentum.
New legislation proposed in California would make the state a pioneer in the emerging field of regulating artificial intelligence.1 On December 7, 2020, California Assembly Member Ed Chau introduced AB 13, the Automated Decision Systems Accountability Act of 2021,2 which aims to end algorithmic bias against groups protected by federal and state anti-discrimination laws.
The draft legislation would be enforced by the new Department of Financial Protection and Innovation, formerly known as the Department of Business Oversight. The state agency is undergoing a major expansion as part of a new consumer financial protection effort led by California Governor Gavin Newsom.
Though this is not the first bill targeting algorithmic bias, other attempts have so far failed to move forward or lack the reach and potential impact of Chau’s bill.3 Federal lawmakers in April 2019 introduced companion Senate and House bills for the Algorithmic Accountability Act, which aims “to address the use of biased or discriminatory algorithmic decisions impacting American consumers.”4 Just over one month later, New Jersey lawmakers introduced similar legislation that would have required covered entities to conduct data protection impact assessments on their “high-risk information systems.”5 But neither the federal nor New Jersey proposed legislation made it out of committee.
AB 13 would require California businesses that use an “automated decision system” (ADS) to “take affirmative steps to ensure that there are processes in place to continually test for biases during the development and usage of the ADS.”6 An “automated decision system” is any computational process, including one derived from machine learning, statistics, or other data-processing or artificial-intelligence techniques, that makes a decision or facilitates human decision making that impacts persons.7 The law would apply to any business—defined in the bill as a digital or software company that creates or distributes an ADS—with a focus on financial institutions. By January 1, 2023, every regulated business would have to submit an annual report to the California Department of Financial Protection and Innovation (DFPI) providing specified information about its ADS impact assessment. The impact assessment must include, among other things, (1) the name, vendor, and version of the ADS and a description of its general capabilities; (2) the type or types of data inputs that the ADS uses; (3) a description of the purpose of the ADS; and (4) a clear use and data-management policy spelling out, for instance, what information is available to consumers and the extent to which consumers can access, correct, or object to the ADS’s results. Businesses that fail to comply with the reporting requirement would be subject to civil penalties. The bill, in its current form, does not provide for a private right of action.
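AB 13 states the duty to "continually test for biases" at a legal, not technical, level and does not prescribe any particular methodology. Purely as an illustration of what one component of such a testing process might look like, the sketch below applies the "four-fifths rule" long used by the EEOC to screen for adverse impact: if one group's rate of favorable outcomes falls below 80% of another group's, the disparity is conventionally flagged for further review. The function names, threshold, and sample data are our own and are not drawn from the bill:

```python
# Illustrative sketch only: AB 13 does not prescribe this or any other test.
# The "four-fifths rule" compares rates of favorable outcomes across groups;
# a ratio below 0.8 is a conventional red flag for adverse impact.

def selection_rate(outcomes):
    """Fraction of favorable (True) outcomes within a group."""
    return sum(outcomes) / len(outcomes)

def four_fifths_check(group_a, group_b, threshold=0.8):
    """Compare the lower selection rate to the higher one across two
    groups; return the ratio and whether it clears the threshold."""
    rate_a = selection_rate(group_a)
    rate_b = selection_rate(group_b)
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio >= threshold

# Hypothetical loan-approval outcomes: 60% approval in one group
# versus 30% in another yields a ratio of 0.5, below the 0.8 mark.
ratio, passes = four_fifths_check(
    [True] * 6 + [False] * 4,   # group A: 6 of 10 approved
    [True] * 3 + [False] * 7,   # group B: 3 of 10 approved
)
print(round(ratio, 2), passes)  # 0.5 False -> flags potential bias
```

A real compliance program would go well beyond a single ratio test — for example, examining individual data inputs, proxies for protected characteristics, and outcomes over time — but a repeatable statistical check of this kind could be one element of the documented, ongoing testing process the bill contemplates.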
Regardless of whether this legislation is passed, companies using artificial intelligence, machine learning, or similar algorithmic systems should be proactive about identifying and eliminating algorithmic bias. Companies should consider adopting a policy for use of algorithmic tools and perform proper diligence when procuring any such tool from vendors. For more on the background of this larger legal issue, see our article and an example policy here.
Given those compliance risks and lawmakers’ increasing interest in regulating AI, companies should take the time now to understand how their AI systems operate and the potential biases associated with them. AB 13 may be heard in committee as early as January 7, 2021.
1 See Yoon Chae, “U.S. AI Regulation Guide: Legislative Overview and Practical Considerations,” The Journal of Robotics, Artificial Intelligence & Law (Jan.-Feb. 2020) (discussing increase in number of bills containing the term “artificial intelligence” from two bills in the 114th Congress to 51 bills in the 116th Congress; similarly noting California’s increase from zero bills on “artificial intelligence” in 2015-2016 to 13 bills in the current legislature), available at https://www.bakermckenzie.com/-/media/files/people/chae-yoon/rail-us-ai-regulation-guide.pdf.
2 AB 13, 2021-2022 State Assemb., Reg. Sess. (Dec. 7, 2020), available at https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202120220AB13.
3 In 2018, New York City enacted the nation’s first algorithmic accountability law, which regulates New York City agencies’ use of algorithms by creating a task force to oversee the government’s use of algorithms, examine how error and bias enter into their design, and recommend measures that ensure accuracy and fairness. See Press Release, City Council Passes First Bill in Nation to Address Transparency, Bias in Government Use of Algorithms (Dec. 11, 2017), available at https://www.nyclu.org/en/press-releases/city-council-passes-first-bill-nation-address-transparency-bias-government-use.
4 Algorithmic Accountability Act of 2019, S. 1108, H.R. 2231, 116th Cong. (2019); 165 Cong. Rec. S2389-90 (daily ed. Apr. 10, 2019) (statement of Sen. Ron Wyden).
5 New Jersey Algorithmic Accountability Act, AB 5430, 218th Leg., 2019 Reg. Sess. (N.J. 2019).
6 AB 13, supra note 2, § 2.
This memorandum is a summary for general information and discussion only and may be considered an advertisement for certain purposes. It is not a full analysis of the matters presented, may not be relied upon as legal advice, and does not purport to represent the views of our clients or the Firm. Melody Drummond Hansen, an O’Melveny partner licensed to practice law in California, the District of Columbia, and Illinois, Elizabeth L. McKeen, an O’Melveny partner licensed to practice law in California, Heather J. Meeker, an O’Melveny partner licensed to practice law in California, Eric Sibbitt, an O’Melveny partner licensed to practice law in California and New York, Daniel R. Suvor, an O’Melveny partner licensed to practice law in California, and Ryan Murguía, an O’Melveny counsel licensed to practice law in California, New York, and Texas, contributed to the content of this newsletter. The views expressed in this newsletter are the views of the authors except as otherwise noted.
© 2020 O’Melveny & Myers LLP. All Rights Reserved. Portions of this communication may contain attorney advertising. Prior results do not guarantee a similar outcome. Please direct all inquiries regarding New York’s Rules of Professional Conduct to O’Melveny & Myers LLP, Times Square Tower, 7 Times Square, New York, NY, 10036, T: +1 212 326 2000.