Introduction
This White Paper outlines the role of AI in the practice of law and the challenges of using it.
The adoption of this cutting-edge technology has direct implications for BRIO and other technology partners. Vendors that build tools for law firms occupy an important position in this ecosystem: the ethics rules apply to lawyers, but law firms will increasingly choose vendors whose products make compliance easier rather than harder.
From the perspective of a law firm, ideal AI-enabled tools will:
- Offer strong privacy and security guarantees, including clear limits on training use of customer data.
- Provide configuration options that let firms restrict use to approved matters, users, and data sources.
- Generate audit trails that show who used AI, for what purpose, and how outputs were incorporated into work product (illustrated in the sketch after this list).
- Include built-in warnings or guardrails about hallucinations, missing citations, and potential bias.
- Support role-based access control so that sensitive matters are tightly protected.
- Integrate with existing systems (DMS, case management, billing) in ways that respect least-privilege principles.
- Provide training materials and support targeted specifically at legal ethics and professional responsibility.
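To make these features concrete, the following is a minimal sketch, in Python, of what an audit-trail record and a matter-level access check might look like inside an AI-enabled legal tool. The field names, matter identifiers, and user assignments are illustrative assumptions, not a description of BRIO’s or any other vendor’s product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Purpose(Enum):
    """Hypothetical categories of AI-assisted work, used for audit reporting."""
    RESEARCH_SUMMARY = "research_summary"
    DRAFT_REVISION = "draft_revision"
    DOCUMENT_REVIEW = "document_review"


@dataclass
class AuditRecord:
    """One audit-trail entry: who used AI, on which matter, for what purpose,
    and how the output was incorporated into work product."""
    user_id: str
    matter_id: str
    purpose: Purpose
    output_disposition: str  # e.g. "edited and verified by supervising attorney"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# Role-based access control: only users assigned to a matter may run AI tasks
# against that matter's documents (illustrative mapping only).
MATTER_TEAMS = {"M-1001": {"alice", "bob"}}


def may_use_ai(user_id: str, matter_id: str) -> bool:
    return user_id in MATTER_TEAMS.get(matter_id, set())
```

Records like these can be exported to a firm’s existing reporting tools, which is one way a vendor can make supervision and billing transparency easier rather than harder.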
By aligning product design with the ethical obligations summarized in this document, vendors can help law firms deploy AI in a way that is both innovative and compliant—and earn trust as long-term partners.
Overview
This summary pulls together the key points from the seminar “Another Year into the Artificial Intelligence Era: Developments in Legal Ethics,” the accompanying slide deck, and multiple bar association reports and ethics opinions on generative AI.
Across jurisdictions, the message is consistent: lawyers may use generative AI, but only if they do so in a way that complies with existing duties of competence, confidentiality, supervision, communication, candor to tribunals, avoidance of frivolous claims, fair billing, and compliance with advertising and unauthorized practice of law rules. AI does not change the ethics rules; it changes how those rules are applied.
What Generative AI Is and How Law Practices Are Using It
“Artificial intelligence” in this context refers to computer systems that perform tasks that typically require human intelligence—reasoning, pattern recognition, language understanding, and prediction. Generative AI (GAI) is a subset that creates new content (text, images, code, audio, video) in response to prompts. Most legal-focused tools today are large language model (LLM) systems that generate text.
In law practice, generative AI tools are already being used to:
- Draft and revise emails, letters, internal memos, briefs, and discovery requests and responses.
- Assist with legal research, including summarizing cases and statutes and suggesting authorities.
- Review and summarize contracts and other large document sets.
- Generate chronologies, issue lists, deposition questions, and trial themes.
- Support compliance, policy drafting, and regulatory monitoring tasks.
- Handle administrative work such as time entry narratives, marketing content, and basic HR documents.
Outside the law firm context, courts, legal aid organizations, and pro bono programs are experimenting with AI for self-help tools, triage, and guided forms, potentially improving access to justice but also introducing new risks around accuracy, fairness, and transparency.
The Big Themes: What Ethics Authorities Agree On
Although terminology differs, bar reports and ethics opinions are strikingly aligned. The core messages are:
- Generative AI is a powerful tool, not a replacement for lawyers. Human judgment remains indispensable.
- Lawyers must understand enough about the tools they use to evaluate their reliability, limitations, and risks.
- Client confidentiality and privilege must be protected when any client-related information is used with AI tools.
- Lawyers remain fully responsible for any work product generated with AI, including accuracy of facts and law.
- Use of AI must not lead to excessive or deceptive fees, misleading advertising, or unauthorized practice of law.
- Firms should adopt policies, training, and governance structures around AI use and vendor selection.
The rest of this summary unpacks the main ethical duties as they relate to AI and offers practical guidance for lawyers, staff, and law firm leaders.
Duty of Competence and Technological Competence
The duty of competence requires lawyers to possess the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation. Modern rules explicitly include a duty to understand the benefits and risks associated with relevant technology.
With respect to generative AI in law practice, competence has several dimensions:
Understanding capabilities and limitations
Lawyers do not need to become AI engineers, but they must reasonably understand what a given tool can and cannot do. That includes recognizing that generative AI:
- Can produce fluent and plausible-sounding text that is nevertheless wrong, incomplete, or out of date.
- May “hallucinate” cases, statutes, and facts, especially in niche or evolving areas of law.
- May reflect biases that exist in its training data (e.g., by gender, race, geography, or socioeconomic status).
- Usually cannot explain, in a transparent way, exactly how it arrived at a particular output.
Competent use requires testing tools, understanding their failure modes, and adjusting how and when they are used.
No “autopilot lawyering”
Across opinions, authorities are clear: overreliance on AI is inconsistent with a lawyer’s duty to exercise independent professional judgment. AI outputs can be a starting point, but never the final product. Lawyers must:
- Critically review and edit AI-generated work product.
- Verify legal authorities and quotations using trusted research tools.
- Check facts against the underlying record or reliable sources.
- Assess whether AI suggestions align with the client’s goals, risk tolerance, and broader strategy.
Ongoing education
Because AI tools and legal guidance are rapidly evolving, competence is not a one-time event. Lawyers and firms should build AI into their professional development: CLE programs, internal trainings, pilot projects with structured evaluation, and collaboration with technologists and security professionals.
Confidentiality, Privilege, and Data Protection
The duty of confidentiality extends to all information relating to the representation, whatever its source. Generative AI tools raise distinct risks because many systems store prompts and uploaded documents, use them to further train the model, or share them with third parties.
Key elements of a defensible approach to confidentiality when using AI in law practice include:
- Know the tool: review the provider’s terms of use, privacy policy, data retention practices, and whether the system is “self-learning” on user inputs.
- Avoid public tools for sensitive data: do not paste client names, facts, or documents into public or consumer-grade AI tools that reuse prompts for training or analytics.
- Prefer enterprise or in-house tools: use products that provide contractual assurances about confidentiality, data segregation, security controls, and audit logs.
- Anonymize and minimize: where possible, strip identifying details, use hypotheticals, and share only the minimum information needed for the task (see the sketch after this list).
- Obtain informed consent when appropriate: some authorities require, and others recommend, obtaining client consent before sending confidential information to third-party AI providers.
- Coordinate with IT and security: involve security and privacy professionals when choosing or configuring AI tools, especially those integrated with document management or case management systems.
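As an illustration of the “anonymize and minimize” point above, the sketch below shows one way a firm might scrub known client identifiers and obvious contact details from a prompt before it leaves the firm’s systems. The identifier list, placeholders, and patterns are assumptions made for illustration; pattern matching is not foolproof, and a person should still review anything sent to a third-party tool.

```python
import re

# Hypothetical mapping of client-identifying strings to neutral placeholders.
IDENTIFIERS = {
    "Acme Widgets LLC": "[CLIENT]",
    "Jane Q. Smith": "[OPPOSING PARTY]",
}

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")


def minimize(prompt: str) -> str:
    """Replace known identifiers and common contact-detail patterns before a
    prompt is sent outside the firm. A human should still review the result."""
    for name, placeholder in IDENTIFIERS.items():
        prompt = prompt.replace(name, placeholder)
    prompt = EMAIL_RE.sub("[EMAIL]", prompt)
    prompt = PHONE_RE.sub("[PHONE]", prompt)
    return prompt


print(minimize("Summarize the dispute between Acme Widgets LLC and Jane Q. Smith (jane@example.com)."))
```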
The same analysis applies to attorney–client privilege and work product. If a tool’s terms of use allow the vendor broad rights to access, use, or disclose inputs, that may pose risks to privilege. These issues should be considered when selecting vendors and drafting retainer agreements and outside counsel guidelines.
Supervision, Firm Governance, and AI Policies in Law Practice
Supervisory duties require partners and managers to ensure that lawyers and non-lawyer staff comply with professional obligations. Many authorities treat AI tools as functionally similar to non-lawyer assistants or outsourced providers.
Practical implications for firms include:
- Adopt a written AI use policy that applies to lawyers, staff, contractors, vendors, and affiliates.
- Designate an AI oversight group or committee to vet tools, update policies, and monitor usage and emerging risks.
- Define where AI can and cannot be used (e.g., internal brainstorming vs. drafting filings, client-facing advice, or intake).
- Require human review and approval of AI-assisted work product, especially anything filed with a court or sent to clients or opposing counsel.
- Train all personnel regularly on the policy, including examples of permitted and prohibited uses.
- Set expectations and workflows for checking AI outputs for accuracy, bias, and completeness.
- Build vendor due diligence into procurement, including questions about training data, security, bias testing, and legal-sector experience.
Some bar task forces go further and provide model AI policies and vendor questionnaires that firms can adapt. The common goal is to ensure that AI is used deliberately, with documented controls and accountability.
Communication and Client Consent
The duty to communicate requires keeping clients reasonably informed and explaining matters to the extent reasonably necessary to permit informed decisions. Generative AI implicates this duty in at least three ways:
Explaining AI use where it matters
Where AI is material to how services will be delivered—particularly where confidential information will be transmitted to third-party tools or where AI may significantly affect cost, speed, or approach—many authorities recommend telling clients how AI will be used, its benefits and risks, and any alternatives.
Engagement letter language
Several reports include sample engagement provisions addressing AI use. Common themes include: describing AI as a tool used to assist with research and drafting; confirming that attorneys remain responsible for all work product; acknowledging confidentiality and security measures; and explaining how AI may impact fees and costs.
Managing expectations
Clients may assume that AI will make everything instantaneous and inexpensive. Clear communication about where AI can and cannot be used, and about the need for lawyer review and quality control, is important to avoid disputes over timing, scope, or cost.
Candor to Tribunals, Meritorious Claims, and Litigation Misuses
Recent cases where lawyers submitted briefs with fabricated AI-generated citations have become cautionary tales. Judges have sanctioned lawyers and emphasized that, while technology can be used to assist with research, lawyers remain responsible for verifying that authorities exist and say what counsel claims they say.
Ethics rules on candor to tribunals and meritorious claims require that lawyers:
- Not make false statements of fact or law to a court, nor fail to correct false statements previously made.
- Present only non-frivolous claims, contentions, and arguments.
- Reasonably investigate authorities and facts relied on in pleadings, motions, and briefs.
In the AI context, that means lawyers must personally verify cases, quotations, and records cited in any document drafted with AI assistance. Many courts now require certifications regarding AI use or expressly warn against relying on unverified AI research. Even where no specific rule exists, it is prudent to document how research was conducted and how AI outputs were checked.
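Because citation checking lends itself to a simple, repeatable step, the sketch below shows one way to pull citation-like strings out of an AI-assisted draft so that each can be verified by hand against a trusted research service and the check documented. The reporter abbreviations and pattern are illustrative assumptions and deliberately incomplete; the point is the workflow (extract, verify, document), not the particular expression.

```python
import re

# A few common federal reporter abbreviations; a real checklist would cover far more.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\.\s?Ct\.|F\.(?:2d|3d|4th)?|F\.\s?Supp\.(?:\s2d|\s3d)?)\s+\d{1,4}\b"
)


def citation_checklist(draft: str) -> list[str]:
    """Return the citation-like strings found in a draft, deduplicated,
    so each one can be confirmed to exist and to say what counsel claims."""
    return sorted(set(CITATION_RE.findall(draft)))


print(citation_checklist("As held in 576 U.S. 644 and 598 F.3d 1141, ..."))
```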
Fees, Billing, and the Economics of AI and Law Practice
Ethics rules require that fees and costs be reasonable and communicated to the client. Generative AI intersects with billing in several ways.
What can be billed
Most authorities agree that lawyers may bill for actual, case-specific work performed when using AI—for example, time spent crafting prompts, reviewing and editing AI-generated drafts, and validating authorities and facts.
What cannot be billed
Lawyers generally should not bill clients for time spent learning how to use AI tools at a basic level or for time “saved” by using AI (for example, billing the same number of hours that manual drafting would have taken). Double-billing or padding time because AI is faster is inconsistent with rules on reasonable fees and honesty.
Handling AI-related costs
Firms may pass through reasonable AI-related costs (such as per-seat or per-document charges) if permitted by law and clearly explained in the fee agreement. As AI leads to efficiency gains, firms may need to adjust pricing models and be prepared to explain how technology is used to deliver value.
Advertising, Chatbots, and Direct Client Interaction
Some firms are deploying AI-powered chatbots on websites or using them in marketing, intake, or client portals. Ethics rules on advertising, solicitation, and deception apply fully in these contexts.
Important safeguards include:
- Clear disclosure that users are interacting with an AI tool, not a lawyer or law firm employee (see the sketch after this list).
- Avoiding statements that imply the chatbot is a lawyer or offers individualized legal advice if that is not permitted.
- Ensuring that any content generated by AI on a firm’s website or in marketing materials is accurate, not misleading, and consistent with rules on claims about expertise or results.
- Screening out users who are already represented by counsel or located in jurisdictions where the firm is not licensed, where required.
- Monitoring chatbot interactions and promptly correcting any problematic content or practices.
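As a simple illustration of the first and fourth points above, the sketch below shows how an intake chatbot might surface an AI disclosure up front and screen out users the firm cannot serve. The disclosure wording, jurisdiction list, and function are hypothetical; actual language should be reviewed against the applicable advertising and unauthorized practice rules.

```python
# Illustrative only; not a statement about any firm's actual licensure.
LICENSED_JURISDICTIONS = {"NY", "NJ"}

DISCLOSURE = (
    "You are chatting with an automated assistant, not a lawyer. "
    "This tool provides general information only and does not create "
    "an attorney-client relationship."
)


def start_intake(user_state: str, already_represented: bool) -> str:
    """Lead with the AI disclosure, then screen before any substantive exchange."""
    if already_represented:
        return DISCLOSURE + " Because you are already represented by counsel, we cannot assist you here."
    if user_state not in LICENSED_JURISDICTIONS:
        return DISCLOSURE + " We are not able to assist with matters in your jurisdiction."
    return DISCLOSURE + " How can we help you today?"
```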
Ultimately, the firm is responsible for what its AI systems “say” to prospective clients.
Unauthorized Practice of Law, Access to Justice, and AI
AI tools that provide legal information or guidance to the public raise questions about what constitutes the practice of law and when AI-assisted services cross into unauthorized practice. Bar reports generally recognize both sides:
- AI self-help tools can expand access to legal information and help unrepresented individuals understand their options.
- Poorly designed or unregulated tools can mislead users, produce incorrect or biased guidance, or give the impression of an attorney–client relationship where none exists.
Lawyers must avoid assisting in unauthorized practice, including by deploying or endorsing AI tools that market themselves as providing personalized legal advice in jurisdictions where neither the lawyer nor the tool is authorized.
At the same time, regulators and task forces are exploring ways to harness AI for access-to-justice purposes while maintaining safeguards, transparency, and accountability.
Bias, Discrimination, and Fairness
Because AI models learn from historical data, they can replicate or amplify existing societal biases. In the legal context, that risk is acute when AI is used for hiring, client screening, risk assessment, or evaluating cases and settlement strategies.
Firms should:
- Be aware that training data may contain implicit or explicit biases, leading to skewed or unfair outcomes.
- Avoid using AI as the sole or primary decision-maker for employment decisions, client selection, or case valuation.
- Implement procedures to test outputs for disparate impact on protected groups where feasible.
- Train lawyers and staff to recognize biased outputs and to escalate concerns to firm leadership and, where appropriate, to vendors.
Ethical rules prohibiting discrimination and harassment apply equally to AI-enabled practices. Using biased tools without adequate oversight can expose firms to both professional and legal risk.
Law Practice AI Governance and Vendor Selection
Several task force reports emphasize the importance of structured governance frameworks to manage AI risks. Typical elements include:
- An AI use policy that defines scope, roles, and responsibilities.
- An inventory of AI tools in use and their purposes.
- Risk classification of tools (e.g., low, medium, high) based on sensitivity of data and impact on clients (see the sketch after this list).
- Documentation of human review and approval for AI-assisted work product, especially in high-risk contexts.
- Incident response plans for AI-related errors, data breaches, or other failures.
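To make the risk-classification element concrete, here is a minimal sketch of a tool inventory keyed to risk tiers that in turn drive review requirements. The tool names, tiers, and thresholds are illustrative assumptions rather than a recommended taxonomy.

```python
from enum import Enum


class Risk(Enum):
    LOW = "low"        # e.g., internal brainstorming, no client data
    MEDIUM = "medium"  # e.g., anonymized client data, routine review
    HIGH = "high"      # e.g., identifiable client data or court filings


# Illustrative inventory: each tool's purpose and risk tier, which in turn
# determines the review and approval the firm's policy requires.
AI_TOOL_INVENTORY = {
    "research-assistant": {"purpose": "case law summarization", "risk": Risk.MEDIUM},
    "drafting-copilot": {"purpose": "brief and motion drafting", "risk": Risk.HIGH},
    "marketing-writer": {"purpose": "newsletter copy", "risk": Risk.LOW},
}


def requires_documented_review(tool_name: str) -> bool:
    """High-risk tools always require documented human review and approval."""
    return AI_TOOL_INVENTORY[tool_name]["risk"] is Risk.HIGH
```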
Vendor selection is a key part of governance. Helpful questions to ask AI vendors include:
- What data is used to train the model, and how is new user data handled?
- How does the vendor protect confidentiality, security, and privacy (including breach response and data deletion)?
- What testing has been done on accuracy and bias, especially for legal use cases?
- What audit logs, usage reports, and controls are available to customers?
- Is the tool specifically designed for legal practice, and does the vendor understand professional responsibility obligations?
These questions matter not only for risk management but also for demonstrating to courts, regulators, and clients that the firm is using AI responsibly.
Conclusion
The common theme across bar reports and ethics opinions is not fear of technology, but insistence on preserving core professional values in a new technical environment. AI can help lawyers serve clients more efficiently, expand access to justice, and improve the quality and consistency of legal work. It can also mislead, introduce bias, and erode trust if used without judgment and safeguards.
For lawyers and law firm staff, the path forward is clear: understand the tools, control the risks, keep humans in charge, and let long-standing ethical duties guide how AI is brought into practice. For partners like BRIO, the opportunity is to build and support systems that make it easy for firms to live up to those duties while reaping the benefits of this new generation of technology.