By: Laura T. Geyer, Of Counsel, and Andrew J. Costa, Associate

Whether we like it or not, AI is quickly becoming less of a tech novelty and more an unavoidable part of daily life. While it may be tempting to defer taking action due to the uncertainty around AI and the legal landscape around it – or, even more ostrich-like, to assume one’s business is immune from the proliferating uses of AI – this approach becomes a worse idea every day. Businesses should adopt a proactive approach to AI now, one that takes into account the emerging consensus on values around the use of AI, such as the G7’s Guiding Principles for Advanced AI and President Biden’s Executive Order on AI. Indeed, even absent a comprehensive regulatory structure in the US (like the EU’s new AI Act), agencies like the FTC are already taking an aggressive approach to protecting consumers; and other agencies and entities can be expected to follow in short order. In this ecosystem, it is imperative for businesses to take initiative on AI, even if they don’t think AI affects them – it almost certainly does.

Companies that are not proactive in examining their current AI practices and developing AI policies could find themselves the target of various sorts of enforcement or – perhaps worse – the subject of national or even international scandal. Even lawyers are not immune — after all, you do not want to be the lawyer caught citing fake cases in a brief or unwittingly misusing consumer data! In short, it is time for businesses of all kinds (including law firms!) to invest in developing comprehensive AI governance programs so they are not blindsided by hidden AI risks both within and without their organizations.

While the best AI governance programs should be tailored to the unique needs of each business, below are some key considerations that apply to all businesses. Over the coming months we will be officially launching our own suite of AI governance program services to help our clients better understand and mitigate the risks of AI, but this is a sneak peek at the elements these programs should include.

Codify Your AI Governance Policies

The overwhelming consensus is that businesses should develop corporate policies around AI that adhere to emerging values of transparency, equity, and consumer protection; but what is equally important is codifying those policies, so employees, customers, and vendors know you have them. It is not only the right thing to do, but also the smart thing to do – if for no other reason than to be able to say you have an AI policy and can present it to customers, vendors (or the media) if they ask. AI governance policies should therefore be concrete and transparent to all stakeholders, including employees, contractors, and vendors. Once drafted, at a minimum, the AI governance policy should be shared with these stakeholders and may even be included in employment manuals and agreements. Employees should also be trained on the policy. While having an AI governance policy may not necessarily protect you from regulatory oversight, at the very least it evidences a good faith effort to responsibly address AI in your business (and, when coupled with appropriate training, may help prevent adverse results).

Be Introspective About AI by Analyzing Current Usage and Implementing “Open Door” Policies

It cannot be overstated that businesses need to “know what they don’t know” when it comes to AI. AI has taken us all by storm, and businesses likely do not know the extent to which employees, vendors, and contractors are leveraging AI in doing their work. Businesses need to understand not only how these parties are using AI, but also to begin building competency and institutional knowledge about AI, so they can evaluate how it affects their business and design processes and procedures around it. For example, “AI tools” (like ChatGPT) are distinct from “tools that use AI” (like Adobe’s Photoshop). (If you don’t know why, feel free to reach out to one of us to discuss!) Your employees may be using both, each with its own set of risks that must be analyzed; but you can’t begin this analysis until you understand what AI services your employees or vendors are using (or are being tempted to use). Drawing back this curtain is not as overwhelming as it sounds: a good first step is simply a conversation with the business’s employees, vendors, and contractors about how they are currently using AI, so you can understand what tools are in play and begin assessing risks. Businesses must also develop a culture of transparency around AI use (rather than a culture of fear). An “Open Door” policy as to AI may also help uncover less obvious uses of AI, especially after the initial use investigation is completed.

Be Mindful of How You Use Data and Licensed Material in AI Tools

Most companies have troves of valuable data, whether customer, internal, or other kinds of data. Furthermore, some businesses may hold licenses to other sorts of material, such as copyrighted works. But having the legal right to use certain data does not necessarily mean you have a right to use that data for any purpose, or to use copyrighted material outside the bounds of a license – and this particularly includes use as an input to an AI tool or as training data for an AI model. This gets even more complex when the data at issue is customer data (or even biometrics). Businesses should be extremely careful (and seek legal counsel) before using consumer and employee data with AI. For example, some businesses may be tempted to retroactively modify “Terms of Service” or agreements with respect to data use – but this may not always be sufficient to protect you from liability for misuse of data. Any AI governance program should therefore include a thorough legal review of existing contractual obligations and address the intersection of AI, data use, and privacy both in contracts and in the business’s AI governance policy.

Vet Every AI Tool Used

Once current AI uses are uncovered – and moving forward, as new AI tools are considered – businesses need to scrutinize each AI tool they use (ideally before permitting its use for business purposes). This includes thoroughly reviewing each AI tool’s terms and conditions and understanding the distinctions among them. For example, some AI tools offer different terms and conditions for individual users versus enterprise-wide deployments. ChatGPT, for instance, offers different layers of privacy and security across its license tiers. If your audit reveals that employees are using ChatGPT, they are probably using the individual, consumer web version rather than an API-based deployment – and that version presents more risk. An effective AI governance program should establish a process for the business to approve each tool its employees, vendors, or others use to perform work.

Don’t Rely on AI Providers’ Indemnity Clauses as Blanket Protection from Liability

While it may be tempting to assume that companies like Microsoft, Adobe, and OpenAI will indemnify you for “innocent” misuses of AI (such as copyright infringement), relying on such indemnification is generally a very bad idea. Instead, businesses (and individuals) should assume that they will be directly liable for any misuse of AI and develop mitigation plans that address such use as part of their AI governance strategy. It’s important to recognize that indemnity with respect to generative AI usually means protecting the user from copyright infringement claims arising from the AI’s use of copyrighted training material to generate the output, not from copyright infringement claims that could arise from downstream commercial use of that output. In many instances, these companies further require users to demonstrate compliance with their other terms, conditions, or service guidelines before they will even provide this degree of indemnity. In other cases, commercial use is strictly prohibited. Businesses therefore need to think carefully about their exposure when using AI, and make sure their governance plans provide enough independent protection in the event they are directly liable for misuse of AI.

These are only a few things businesses should consider when developing an AI governance program. Aside from business-specific considerations, other measures – like employee training on the types of AI, the risks each type presents, and best practices for mitigating those risks – should also be considered. Importantly, given the fast pace of technical and legal developments in AI, once an AI governance program is put into place, it needs to be revisited regularly to ensure it remains consistent with the evolving landscape. To be sure, AI presents exciting opportunities for businesses alongside its substantial risks. A good AI governance program will help you leverage AI to your business’s fullest potential while mitigating the risks that come along with it.