Generative AI productivity tools have become increasingly popular in the legal profession, offering significant potential to enhance efficiency and streamline workflows. Tools like Otter.ai, Zoom AI Companion, and Microsoft Teams AI Note Taker can summarize remote meetings and transcribe conversations. But they can also be a privacy disaster for the unwary and uneducated user, as an article in this morning’s Washington Post illustrates.
Researcher and engineer Alex Bilzerian said on X last week that, after a Zoom meeting with some venture capital investors, he got an automated email from Otter.ai, a transcription service with an “AI meeting assistant.” The email contained a transcript of the meeting — including the part that happened after Bilzerian logged off, when the investors discussed their firm’s strategic failures and cooked metrics.
Ouch! Unsurprisingly, Bilzerian did not proceed with the deal. The article describes several other examples in which AI shared information intended to be private. That’s a big problem, especially for attorneys who might inadvertently disclose confidential client information. It’s important to note, however, that these tools themselves are not inherently problematic. The responsibility lies with us, as legal professionals, to understand and use them appropriately.
If you’re a lawyer and haven’t yet read the new ABA Formal Opinion 512 on generative AI in the practice of law, then stop what you’re doing and read it. It states that “lawyers must have a reasonable understanding of the capabilities and limitations of the specific GAI technology that the lawyer might use.”
That doesn’t apply to me since I’m not planning to use generative AI, you say? Think again. “Even in the absence of an expectation for lawyers to use GAI tools as a matter of course, lawyers should become aware of the GAI tools relevant to their work so that they can make an informed decision, as a matter of professional judgment, whether to avail themselves of these tools or to conduct their work by other means,” the opinion notes.
And clients increasingly expect attorneys to use generative AI. According to a survey conducted by Clio earlier this year, prospective clients are more likely than lawyers to believe that the benefits of lawyers using AI-powered software for tasks ranging from marketing to legal research to billing far outweigh the costs: 32% of prospective clients held that view, compared to 20% of lawyers in small firms and 19% of solo lawyers.
Given these considerations, here are some best practices for legal professionals when using generative AI productivity tools:
- Ethical Alignment: Read ABA Formal Opinion 512 on generative AI. Reflect on how your use – or non-use – of AI tools aligns with your ethical obligations, particularly those related to client confidentiality and competence.
- Read and Understand Terms: Review and understand the terms of use, privacy policy, and related contractual terms of any generative AI tool you use, or consult someone who does understand them.
- Understand Your Tools: Familiarize yourself with the features and settings of any AI tools you use, particularly those handling sensitive information.
- Regular Settings Check: Periodically review and update sharing permissions and auto-share settings to align with confidentiality requirements.
- Informed Consent: Always notify meeting participants when using AI assistants for recording or transcribing, and obtain necessary consents.
- Review Before Sharing: Before sharing any AI-generated content, carefully review it for errors and sensitive information, and critically consider whether distribution is necessary or appropriate at all.
- Stay Informed: Keep up with developments in AI technology and related ethical guidelines in the legal profession.
Full disclosure:
I used Claude.AI to help me develop this post. However, it’s important to note that this wasn’t a simple copy-and-paste job. I engaged in a critical review process, carefully examining the AI’s suggestions, engaging in a back-and-forth conversation to refine the ideas, reviewing them for accuracy, and then synthesizing the information through the lens of my professional experience. This approach not only helped in creating this post but also serves as a practical example of how AI tools can be used responsibly in our professional work – with human oversight, critical thinking, and ethical considerations at the forefront.
Hat tip to beSpacific for the link to the Washington Post story.