O’Melveny Artificial Intelligence Lawyers Offer Example AI Policy to Help Companies Mitigate AI Bias Risks

December 3, 2020

With more and more companies employing artificial intelligence in their operations, it is imperative that they recognize the risks of AI bias, which can arise from implicit biases in the selection of data used to train AI systems and can produce discriminatory outcomes when those systems are deployed.

Lawmakers, regulators, and civil activists have begun to focus on AI biases and how they might affect our society. As they do so, they have demanded that businesses be held accountable for their use of AI.

Companies that implement and rely on AI systems need to take proactive measures to avoid bias—whether intentional or not. A best practice is to adopt a corporate AI compliance policy focused on bias prevention and the proper use of AI.

O’Melveny partner Heather Meeker and associate Amit Itai have developed an example policy that enables a company’s technical, business, and legal decision-makers to leverage AI tools while protecting its values and mitigating legal risks. The example policy can be used as a guide by companies developing their own policies.

View the example policy here.

Meeker and Itai have also explored the topic of bias in AI in the Bloomberg Law articles “Avoiding Human Bias in Artificial Intelligence” and “Bias in Artificial Intelligence: Is Your Bot Bigoted?”