On May 20, 2026, a federal court will hear the first major test of whether AI notetakers count as wiretaps under US law. The case is Brewer v. Otter.ai, and it centers on a sales call recorded by a bot the plaintiff never invited and never agreed to. Whatever the outcome, the era of unwritten norms about AI meeting bots is over. Every team needs an AI meeting bot policy in writing, and most do not have one.

The vacuum is wide. 73% of US workers expect their employer to expand workplace surveillance in 2026. Microsoft Teams will start labeling unverified third-party bots "Unverified" in mid-May. Granola just raised $125M at a $1.5B valuation by displacing the legacy bot vendors with botless capture. And the average enterprise now runs more than 305 SaaS apps — most without a single line of governance covering AI in meetings.

This is a how-to guide for writing an AI meeting bot policy that actually holds up in 2026. We will walk through what an AI meeting bot policy needs to cover, give you a copy-paste template, map the state-by-state two-party consent rules, lay out a green/yellow/red meeting tier system, and finish with a 5-step rollout plan. By the end you will have a defensible AI meeting bot policy your legal, IT, and people-ops teams can ship this week.

Why an AI Meeting Bot Policy Is Not Optional in 2026

Three forces converged in the first half of 2026 to make an AI meeting bot policy a non-negotiable artifact for any company running more than a handful of meetings a week.

First, the legal landscape moved. The Brewer v. Otter.ai class action is on the federal docket. Fireflies, Read.ai, and similar vendors face parallel complaints. Eleven US states require all-party consent for recording: California, Florida, Illinois, Maryland, Massachusetts, Montana, Nevada, New Hampshire, Pennsylvania, and Washington, plus Connecticut for in-person conversations (Connecticut is one-party for calls). If even one participant joins from one of those states, the entire call becomes an all-party-consent event. Most teams discover this only when subpoenaed.

Second, platforms started enforcing what policies should have already covered. Microsoft is rolling out an "Unverified" badge for third-party bots in Teams. Zoom AI Companion now extends into Teams and Google Meet, and Google Meet's "Notes for Me" reached 110 million monthly users and is going botless. Bot-based capture is being structurally deprecated, but old habits persist — every meeting still risks an uninvited Otter pop-up.

Third, the cost shifted. ChatGPT is now the most-expensed app in the US. AI-native SaaS spend is up 108% year over year. Notion's Custom Agents went from beta-free to $10 per 1,000 credits in May 2026. Without an AI meeting bot policy, individual employees are quietly expensing AI tools, recording sensitive calls on consumer plans, and creating shadow data trails that finance, security, and legal cannot see.

The teams that have written an AI meeting bot policy in 2026 are not just protecting themselves — they are running better meetings. Atlassian's State of Teams 2026 report puts the Fortune 500's coordination tax at $161 billion a year, with 87% of knowledge workers saying they lack capacity to coordinate. A clean AI meeting bot policy is one of the highest-leverage interventions for that number.

What Every AI Meeting Bot Policy Must Include

An AI meeting bot policy is a one-to-three-page document, not a manifesto. Long policies don't get read. The seven sections below are the minimum for an AI meeting bot policy that survives audit, lawsuit, and the day-to-day reality of how teams actually meet.

Section 1: Scope and Definitions

Define what counts as an "AI meeting bot." This sounds trivial. It is not. The category includes any tool that joins a video meeting (Otter, Fireflies, Read.ai, Sembly, Avoma), any tool that captures meetings without joining (Granola, Google "Notes for Me", Zoom AI Companion), and any browser extension or operating-system-level recorder that touches meeting audio. Your AI meeting bot policy must apply to all three. Most policies cover only category one and miss the rest.

Define participant types: employees, contractors, customers, vendors, regulators. Different categories trigger different consent rules.
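
To make the scope concrete, here is a minimal sketch, in Python, of the three capture categories and the participant types the policy has to cover. The type and field names are illustrative, not drawn from any vendor's API.

```python
# Minimal sketch of the policy's scope definitions. Names are illustrative.
from dataclasses import dataclass
from enum import Enum


class CaptureMode(Enum):
    JOINS_MEETING = "bot joins the call as a participant"      # Otter, Fireflies, Read.ai
    BOTLESS_CAPTURE = "captures the meeting without joining"   # Granola, Notes for Me
    LOCAL_RECORDER = "browser extension or OS-level recorder"


class ParticipantType(Enum):
    EMPLOYEE = "employee"
    CONTRACTOR = "contractor"
    CUSTOMER = "customer"
    VENDOR = "vendor"
    REGULATOR = "regulator"


@dataclass
class MeetingTool:
    name: str
    capture_mode: CaptureMode  # the policy applies to all three modes
```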

Section 2: Consent Requirements by State

This is the heart of an AI meeting bot policy. Eleven US states require all-party consent for recording. If any participant joins from one of those states, every participant must consent. The federal Electronic Communications Privacy Act sets a one-party-consent floor, but stricter state laws still apply on top of it.

The clean rule for an AI meeting bot policy is: "If any participant is in a two-party state, treat the meeting as all-party consent." This is simpler than trying to map who is where in real time. Combine that with a default verbal consent script: "This meeting is being captured by [tool name]. The notes will go to [recipient list]. Object now if you'd prefer not to be recorded." That single sentence resolves 95% of legal risk under any state regime.
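
Here is a minimal sketch of that rule in Python, assuming participant locations are known (self-reported or pulled from the calendar invite). The state list mirrors the policy text; Connecticut's in-person-only rule is omitted for simplicity.

```python
# Minimal sketch of the "treat as all-party consent" rule.
ALL_PARTY_STATES = {"CA", "FL", "IL", "MD", "MA", "MT", "NV", "NH", "PA", "WA"}
CONSENT_SCRIPT = (
    "This meeting is being captured by {tool}. The notes will go to "
    "{recipients}. Object now if you'd prefer not to be recorded."
)


def requires_all_party_consent(participant_states: list[str]) -> bool:
    """Return True if any participant joins from an all-party-consent state."""
    return any(state in ALL_PARTY_STATES for state in participant_states)


def consent_announcement(tool: str, recipients: str) -> str:
    """Build the verbal notice the host reads at the start of the meeting."""
    return CONSENT_SCRIPT.format(tool=tool, recipients=recipients)


# Example: one attendee in Washington makes the whole call all-party consent.
if requires_all_party_consent(["TX", "WA", "NY"]):
    print(consent_announcement("Zoom AI Companion", "the account team"))
```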

Section 3: Approved and Banned Tools

An AI meeting bot policy without a vendor list is a policy in name only. Maintain three explicit lists.

Approved tools — vendors that have signed your data processing agreement, have SOC 2 Type II, do not train models on your meeting data by default, and offer admin controls for retention and deletion. As of May 2026, this list typically includes Microsoft Copilot (with the right tenant settings), Zoom AI Companion (when enterprise data sharing is off), and a small number of native, consent-first vendors.

Conditional tools — vendors allowed for specific use cases or specific teams. These usually include third-party notetakers that pass security review but lack universal trust. Think Granola for sales, Fathom for CS, Read.ai for executive coaching.

Banned tools — vendors with active class actions, known consent failures, or known training-on-customer-data defaults. As of May 2026, this list typically includes specific Otter consumer plans, any unmanaged Fireflies install, and any browser extension that records audio without an explicit join announcement. Update this list quarterly.
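
Here is a minimal sketch of the three-list registry as data, using the article's example vendors. The default-deny lookup means any tool not on a list is treated as banned until someone files an exception.

```python
# Minimal sketch of the Section 3 vendor registry. Vendor names are the
# article's examples, not a recommendation.
TOOL_REGISTRY = {
    "approved":    {"Microsoft Copilot", "Zoom AI Companion"},
    "conditional": {"Granola", "Fathom", "Read.ai"},
    "banned":      {"Otter (consumer plan)", "unmanaged Fireflies"},
}


def tool_status(name: str) -> str:
    """Look up a tool; anything not on a list is banned by default."""
    for status, tools in TOOL_REGISTRY.items():
        if name in tools:
            return status
    return "banned"  # default-deny: unknown tools need an exception request


print(tool_status("Granola"))          # conditional
print(tool_status("Random Recorder"))  # banned
```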

Section 4: Meeting Tiers and Bot Policy by Tier

Not every meeting needs the same rules. A clean AI meeting bot policy uses a green/yellow/red tier system.

Green meetings — internal team standups, all-hands, internal retrospectives. AI meeting bots are allowed by default. Notes are auto-shared with the team. No verbal consent required beyond the standing notice in the calendar invite.

Yellow meetings — customer calls, prospect calls, interviews, cross-functional working sessions with sensitive material. AI meeting bots are allowed only with verbal consent at the start. Notes are restricted to attendees plus named distribution list. Recording opt-outs honored on request.

Red meetings — board meetings, legal privilege calls, M&A discussions, employee-relations conversations, customer complaints with regulatory implications, anything covered by HIPAA, GLBA, or attorney-client privilege. AI meeting bots are banned. Period. Manual notes only, by a designated note-taker, on an approved doc system. This single rule prevents 80% of the AI meeting bot policy violations that show up in litigation.
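
The tier rules translate naturally into a lookup table. The sketch below is illustrative; the keyword-based classifier is a crude first pass, and a human always makes the final call on red meetings.

```python
# Minimal sketch of the green/yellow/red tier rules. Field names are illustrative.
TIER_RULES = {
    "green": {
        "ai_capture": "allowed by default",
        "consent": "standing notice in the calendar invite",
        "notes_visibility": "auto-shared with the team",
    },
    "yellow": {
        "ai_capture": "allowed with verbal consent at the start",
        "consent": "host reads the consent script",
        "notes_visibility": "attendees plus named distribution list",
    },
    "red": {
        "ai_capture": "banned",
        "consent": "not applicable - no AI capture",
        "notes_visibility": "manual notes by a designated note-taker",
    },
}

RED_KEYWORDS = {"board", "legal", "m&a", "employee relations", "hipaa"}


def tier_for(meeting_title: str, has_external_attendees: bool) -> str:
    """Crude first-pass classification; humans always get the final say."""
    title = meeting_title.lower()
    if any(keyword in title for keyword in RED_KEYWORDS):
        return "red"
    return "yellow" if has_external_attendees else "green"
```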

Section 5: Data Handling, Retention, and Deletion

Specify where meeting data lives, how long it is retained, and how it is deleted. The default for an AI meeting bot policy in 2026 should be: 90-day retention for green meetings, 180 days for yellow meetings, no AI capture at all for red meetings. Customer data, ID numbers, health information, and financial data are auto-redacted before notes are stored. Participants can request deletion at any time. The vendor's deletion API actually works — you tested it.
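
A minimal sketch of those retention defaults, assuming deletion dates are computed from the meeting date:

```python
# Minimal sketch of the Section 5 retention defaults. "Red" meetings never
# produce AI notes, so there is nothing to retain or delete.
from datetime import date, timedelta

RETENTION_DAYS = {"green": 90, "yellow": 180, "red": 0}


def delete_by(meeting_date: date, tier: str) -> date | None:
    """Return the date AI-generated notes must be deleted, or None for red."""
    days = RETENTION_DAYS[tier]
    return meeting_date + timedelta(days=days) if days else None


print(delete_by(date(2026, 5, 20), "green"))  # 2026-08-18
print(delete_by(date(2026, 5, 20), "red"))    # None - nothing to retain
```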

Section 6: Detection and Enforcement

A policy nobody enforces is a wishlist. The detection part of an AI meeting bot policy should specify how IT scans for unauthorized bots: SSO usage logs, browser-extension audits, endpoint visibility tools, and platform-level bot reports (Microsoft Teams, Zoom, and Meet all expose this in admin consoles). Enforcement should specify the consequence ladder: warning, training, manager escalation, formal HR action.
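
The detection scan can be as simple as matching app names from SSO or expense exports against banned-tool markers. The log format below is hypothetical; substitute whatever your identity provider actually exports.

```python
# Minimal sketch of the monthly detection scan. The log format is hypothetical.
BANNED_MARKERS = ("otter", "fireflies", "read.ai")


def flag_unauthorized(app_log: list[dict]) -> list[dict]:
    """Return log entries whose app name matches a banned-tool marker."""
    return [
        entry for entry in app_log
        if any(marker in entry["app"].lower() for marker in BANNED_MARKERS)
    ]


sample_log = [
    {"user": "a.kim", "app": "Otter.ai", "source": "sso"},
    {"user": "b.ruiz", "app": "Figma", "source": "sso"},
]
for hit in flag_unauthorized(sample_log):
    print(f"{hit['user']} is using {hit['app']} via {hit['source']}")
```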

Section 7: Exceptions and Escalation

Every AI meeting bot policy needs a clear exception path. Who approves an exception? How long does approval take? What gets logged? In practice, this should be a single person — usually the data protection officer or the legal team's privacy lead — and the SLA should be 48 hours. Exceptions get logged in the same ticket system that tracks security incidents, so they are auditable a year later.
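
If it helps, here is a minimal sketch of the fields an exception ticket should capture so it is auditable a year later. The 48-hour SLA mirrors the policy text; the field names and default approver are illustrative.

```python
# Minimal sketch of an exception record. Field names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class PolicyException:
    requester: str
    tool: str
    meeting_tier: str
    justification: str
    approver: str = "DPO"  # single named approver, per the policy
    requested_at: datetime = field(default_factory=datetime.now)

    @property
    def sla_deadline(self) -> datetime:
        """Approval decision is due within 48 hours of the request."""
        return self.requested_at + timedelta(hours=48)
```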

A Copy-Paste AI Meeting Bot Policy Template

Below is a minimal AI meeting bot policy you can paste into your wiki, adapt in 30 minutes, and ship this week. (Related reading: our AI notetaker etiquette playbook covers the soft norms that complement this hard policy.)

[Company] AI Meeting Bot Policy — v1.0 — Effective [Date]

Purpose. This AI meeting bot policy governs the use of any tool that captures, transcribes, summarizes, or analyzes audio or video from internal or external meetings.

Scope. Applies to all employees, contractors, and vendors using [Company] systems or accounts.

Consent. If any meeting participant is located in a two-party-consent state (CA, FL, IL, MD, MA, MT, NV, NH, PA, WA, plus CT for in-person), the meeting is treated as all-party consent. The host announces capture verbally at the start: "This meeting is being captured by [tool]. Notes go to [list]. Speak up if you'd like to opt out."

Approved tools. [List]. Conditional tools. [List]. Banned tools. [List]. Updated quarterly by the [Owner].

Meeting tiers. Green: AI capture allowed by default. Yellow: AI capture with verbal consent. Red (board, legal, HR, M&A, regulated): no AI capture, manual notes only.

Data handling. 90-day retention for green, 180-day for yellow, none for red. Customer PII, health, and financial data auto-redacted. Deletion-on-request supported and tested.

Detection. [IT team] scans for unauthorized bots monthly via SSO logs, extension audits, and platform admin consoles.

Enforcement. Warning → training → manager escalation → HR action.

Exceptions. Submit via [ticket system]. Approved by [DPO/Legal]. SLA 48 hours.

This is the floor of an AI meeting bot policy, not the ceiling. Every team will adapt it. The point is to ship something, not to draft the perfect document.

How to Roll Out an AI Meeting Bot Policy in 5 Steps

A written AI meeting bot policy is worthless without rollout. Here is the 5-step rollout that has worked for teams that actually got this live in Q2 2026.

Step 1 — Inventory. Pull a list of every AI meeting tool currently in use across the company. Use SSO logs, expense reports (search "Otter," "Fireflies," "Granola," "Read"), and a 5-question employee survey. Most teams discover they have 8-15 different bots in production, including ones nobody approved.
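
A minimal sketch of the expense-report half of that inventory, assuming you can export expense lines with free-text descriptions. The search terms are deliberately loose and will over-match, so the output is a candidate list for human review, not a verdict.

```python
# Minimal sketch of the Step 1 expense-report search. The rows are hypothetical;
# in practice this comes from your expense system's export.
VENDOR_TERMS = ("otter", "fireflies", "granola", "read")


def find_ai_tool_expenses(expense_rows: list[dict]) -> list[dict]:
    """Return expense lines whose description mentions a known AI notetaker."""
    return [
        row for row in expense_rows
        if any(term in row["description"].lower() for term in VENDOR_TERMS)
    ]


rows = [
    {"employee": "c.lee", "description": "Otter.ai Pro monthly", "amount": 16.99},
    {"employee": "d.cho", "description": "Team lunch", "amount": 84.00},
]
print(find_ai_tool_expenses(rows))  # only the Otter line is flagged
```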

Step 2 — Triage. Sort the inventory into approved / conditional / banned using the criteria from Section 3 of the AI meeting bot policy. For tools that are widely used but should be banned, plan a 30-day grace period rather than a hard cutoff.

Step 3 — Tooling change. Roll out platform-level controls. In Microsoft Teams, enable Copilot Cowork settings and disable third-party bot joins for sensitive meeting types. In Zoom, restrict AI Companion to internal participants only for yellow meetings. In Google Meet, set "Notes for Me" defaults to off for non-organizers. Each platform's admin console is where the AI meeting bot policy meets reality.

Step 4 — Training. A 20-minute video plus a one-page cheat sheet for every employee. Cover the verbal consent script, the green/yellow/red tiers, and the approved tool list. Include the why — the May 20 Otter hearing, the Meta surveillance scandal of April 2026, the workslop research showing 40% of employees received AI-generated busywork from a colleague last month. People follow policy when they understand the stakes.

Step 5 — Quarterly audit. Repeat Step 1 every 90 days. Update banned tools as new lawsuits and breaches surface. Revisit the tier system after every major platform change. The AI meeting bot policy is a living document, not a one-time artifact.

(For broader context on the cost side of AI sprawl, see our deep-dive on the shadow AI bill breaking 2026 SaaS budgets and our analysis of bot bloat in 2026 meetings.)

Five Common AI Meeting Bot Policy Mistakes

Even good drafts get tripped up by the same patterns. Here are the five most common AI meeting bot policy failures we see in 2026.

  1. No vendor list. A policy with abstract principles but no approved/banned tool list will not survive contact with employees.
  2. State-by-state spaghetti. Trying to map participants to consent rules in real time fails. Default to all-party consent if any two-party state participant is on the call.
  3. No detection mechanism. A policy with no audit path is theater. Detection has to be automated.
  4. Treating recording and AI capture as the same thing. Botless capture (Granola, Notes for Me, Copilot summary) may not count as a recording in the legal sense in some states but does in others. The AI meeting bot policy must call out both.
  5. Forgetting the consent script. The single most effective control in any AI meeting bot policy is one verbal sentence at the start of every yellow or red meeting.

Where Coommit Fits in This Conversation

The structural problem behind every AI meeting bot policy in 2026 is that meetings, capture, and follow-up live in three different products. That is why third-party bots exist in the first place — to bridge a gap that should not be there. Coommit closes the gap by putting video, an interactive canvas, and contextual AI in one workspace, with consent built in by default rather than retrofitted. When the meeting and the capture are the same product, the AI meeting bot policy gets shorter, the legal exposure gets smaller, and the meeting actually produces an artifact people use.

That is not a fix for everyone, but it is the direction the May 2026 platform moves are pointing. Native, consent-first, single-surface AI is replacing bolt-on bots faster than most policies can keep up.