On March 25, 2026, Google Meet flipped a switch that broke half the AI notetaker industry. A new "Potential Risk" bot policy started refusing unknown third-party bots by default, instantly locking Otter, Fireflies, and Fathom out of millions of meetings. Three weeks later, on April 17, Microsoft Teams rolled out external notetaker-bot detection for Copilot customers. And in between, Granola raised $125 million at a $1.5B valuation on one thesis: the meeting bot is dead.

If you're an IT admin, compliance lead, or security-minded team lead, the question stopped being "should we allow AI notetakers?" and became "how do we block AI notetakers that join without consent — across every platform our people use?" Because the platforms are racing ahead, but your default configurations are almost certainly still wide open.

This listicle is the 2026 admin playbook. Nine concrete controls, platform by platform, to block AI notetakers at the organization, tenant, endpoint, and policy layers. You'll leave with a checklist you can run this week, and a reference model you can hand to your security review team.

We'll cover Zoom, Google Meet, Microsoft Teams, calendar-level filters, OAuth audits, and the one architectural choice that makes most of this unnecessary. Let's get into it.

Why You Need to Block AI Notetakers in 2026

Before the tactics, the context. The urgency is real, and it's not hype.

Unsanctioned AI notetakers are a triple liability. First, consent — 13 US states now require all-party consent for recording, and AI bots joining without disclosure expose your company to wiretap claims. Second, confidentiality — notetaker vendors process transcripts on their own infrastructure, often training models on that data, which creates a shadow data-exfiltration channel. Third, trust — a Fortune investigation in February 2026 documented cases where AI summaries hallucinated action items assigned to employees who weren't even in the room, creating real HR nightmares. One Zoom community user reported a guest's notetaker bot silently taking over their host recording — wiping the session entirely.

The cost is showing up on dashboards too. Zylo's 2026 SaaS Management Index found average SaaS spend of $11,530 per employee, with AI-native app spend growing 393% year-over-year in large orgs. Notetakers are a meaningful slice of that bloat. Blocking the unauthorized ones isn't just a security move — it's a procurement move.

Every control below answers the same question: how do I block AI notetakers that neither I nor the meeting host explicitly approved? Here are the nine controls that matter right now.

1. Write a One-Page "No Bots Without Consent" Policy First

The tempting move is to dive into platform settings. Don't. Every technical control below will be argued, circumvented, or quietly disabled unless there's a written policy that tells your people what the rule actually is.

The policy needs to fit on one page and answer four questions. Who can bring an AI notetaker to a meeting? Which tools are approved? What does disclosure look like? What are the consequences of bringing an unapproved bot?

What a good policy includes

Keep it to the four questions above: name the approved tools explicitly, spell out what disclosure means (a verbal mention at the start of the call plus the bot visible in the participant list is a reasonable bar), and state the consequence tiers. Your legal team can own the drafting. But the policy needs to live somewhere everyone sees it — not buried in a wiki. This foundation is what makes every other control on this list defensible.

2. Turn On Google Meet's "Potential Risk" Bot Policy

Google Meet shipped the single most important bot-blocking feature of 2026 on March 25. If you're a Workspace admin and you haven't touched it yet, this is item one for Monday morning.

The "Potential Risk" setting — enabled by default for new tenants, opt-in for existing ones — puts any unrecognized third-party participant into a denial list. When an Otter, Fireflies, or Fathom bot tries to join, Meet flags it as a risk and requires explicit host approval. If the host doesn't admit it, the bot is kicked.

How to configure it

Open the Google Workspace Admin console, navigate to Apps → Google Workspace → Google Meet → Meet video settings, and enable "Potential Risk restriction." Set it organization-wide. Then publish a one-line internal note telling hosts what to expect: unknown bots will land in a waiting state, and they should only admit ones on the approved list from Control #1.

This single toggle lets you block AI notetakers across every Meet call in your org without touching a single user's device. Read.AI scrambled to build a native Google Meet Media API integration specifically because this policy broke their old bot-based architecture. Use the block first, then decide which native integrations you want to allow.

3. Activate Microsoft Teams External Bot Detection

On April 17, 2026, Microsoft Teams followed Google's lead. Copilot-tier tenants now get automatic detection of external transcription and notetaker bots, with admin controls to block or quarantine them.

In the Teams admin center, navigate to Meetings → Meeting policies, select your global policy, and under "External participants," disable "Allow third-party meeting apps." Then under "Recording & transcription," restrict AI transcription to first-party (Copilot) only. If a user needs a different tool, require a formal exception request.

What to watch for

The detection layer is Copilot-tier only, so if part of your org isn't licensed for Copilot, the meeting-policy restrictions above are your real baseline. Watch the exception queue, too: if requests keep piling up for one particular tool, that's a signal to evaluate it for the approved list rather than fight users indefinitely. Combined with Control #2, these two toggles let you block AI notetakers across the two platforms your organization almost certainly uses the most.

4. Lock Down Zoom With Waiting Rooms, App Approvals, and Bot Kick Rules

Zoom is where things get messier. Unlike Google and Microsoft, Zoom hasn't shipped a blanket bot-blocking policy, which means the burden falls on admins to configure multiple settings in concert.

In the Zoom admin portal, start by making the Waiting Room the default for every meeting, so unrecognized participants queue for explicit host admission instead of joining directly. Then tackle the two controls most admins miss:

Zoom Apps Marketplace controls

This is the lever most admins forget. Zoom's Apps Marketplace lets users install third-party integrations tenant-wide, and many notetakers rely on marketplace apps for auto-join behavior. In Advanced → App Marketplace, flip the global setting to "Allow only pre-approved apps," then maintain a short approved list. This is how you block AI notetakers that piggyback on marketplace permissions without having to manually hunt each one down.

Finally, publish a "bot eject" rule for hosts: if a participant named after a notetaker service shows up in a waiting room and isn't on the approved list, kick them and start the meeting. Hosts don't need to be security experts — they just need a clear default. For a deeper look at meeting hygiene and trust, our piece on the AI meeting recording trust crisis covers the broader pattern.
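If your Zoom tier exposes the participant roster (via webhooks or the REST API), the eject rule can even be semi-automated. Here is a minimal sketch of the name check; the display-name patterns are our assumptions about how common bots label themselves, so treat them as a starter list:

```python
import re

# Display-name patterns we assume common notetaker bots use when joining
# as a participant. Extend this from your own meeting logs over time.
BOT_NAME_PATTERNS = [
    r"otter\.?ai",
    r"fireflies",
    r"fathom",
    r"read\.?ai",
    r"note\s*taker",
]

def looks_like_notetaker(display_name: str) -> bool:
    """Return True if a participant's display name matches a known bot pattern."""
    name = display_name.lower()
    return any(re.search(pattern, name) for pattern in BOT_NAME_PATTERNS)
```

Surface a match to the host as a prompt ("this participant looks like a notetaker — admit or remove?") rather than auto-kicking, so a false positive on a human name stays harmless.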

5. Filter Notetaker Bot Emails at the Calendar Layer

Most AI notetakers work by attaching themselves to calendar invites. Otter requests calendar access, scans your events, and auto-joins any meeting it finds. Fireflies does the same. Block the calendar path and you block the bot — cleanly, without touching meeting platforms.

In Google Workspace, go to Gmail → Routing → Compliance → Content compliance, and create an inbound rule that quarantines calendar invites from known notetaker domains. A starter list: `@otter.ai`, `@fireflies.ai`, `@fathom.video`, `@read.ai`, `@grain.com`, `@tldv.io`, `@sembly.ai`. Update it quarterly as new vendors emerge.

In Microsoft 365, the equivalent lives in Exchange admin center → Mail flow → Rules. Create a rule that blocks or quarantines calendar invitations from the same domain list.
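If you want to sanity-check the rule logic before deploying it in either console, the match boils down to a suffix check on the invite sender's domain. A sketch, using the starter list above (verify the domain spellings against your own mail logs before relying on them):

```python
# Starter blocklist of notetaker sender domains; confirm against your mail logs.
NOTETAKER_DOMAINS = {
    "otter.ai", "fireflies.ai", "fathom.video",
    "read.ai", "grain.com", "tldv.io", "sembly.ai",
}

def should_quarantine(sender: str) -> bool:
    """Quarantine a calendar invite whose sender domain, or any parent
    domain (e.g. calendar.otter.ai), is on the notetaker blocklist."""
    domain = sender.rsplit("@", 1)[-1].lower()
    parts = domain.split(".")
    # Walk up the domain labels so subdomains are caught too.
    return any(".".join(parts[i:]) in NOTETAKER_DOMAINS for i in range(len(parts)))
```

Matching on parent domains matters because vendors often send invites from subdomains like `calendar.` or `mail.`, which a naive exact-match rule would miss.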

The edge case nobody covers

Users who manually attach notetaker bots via the vendor's dashboard — for example, by giving Otter OAuth access to their calendar — will still sneak bots in. Control #7 handles that. But the calendar filter catches the majority of auto-join behavior, and it costs nothing.

6. Require SSO and Managed Devices for Every Meeting Participant

If you want one structural control that blocks 80% of the bot problem, it's this: require single sign-on and managed-device enrollment for every meeting participant, including external guests invited into your tenant.

Bots can't authenticate against your SSO provider. They can't produce a valid MDM posture attestation. The moment you make these two requirements non-negotiable, the entire class of "bot joins via shared meeting link" disappears.

How to roll this out without breaking external meetings

The objection is always the same: what about customers, vendors, and job candidates who can't satisfy your SSO? Two options: carve out a guest tier where external participants land in a waiting room and are explicitly admitted by the host, or route external-facing calls to designated low-sensitivity meeting links and reserve the SSO-plus-MDM requirement for internal and sensitive meetings.

Yes, this is more friction for a subset of meetings. That's the point. The friction is what forces bots into a surface where they can't operate. If you want a reference for how SSO and guest models interact with modern work patterns, our shadow AI policy template covers the organizational side.

7. Audit OAuth Scopes Quarterly and Revoke Dormant Notetakers

Here's the inconvenient truth every admin learns the hard way: the bot isn't always the problem. The OAuth grant is.

When a user clicks "Allow Otter to access your calendar," they're giving a third-party service a persistent token to read their events, join meetings on their behalf, and often access contact lists too. Even if you block every bot at the platform layer, an active OAuth grant lets the notetaker sync meetings directly through the user's authenticated session.

In Google Workspace, open Admin console → Security → API controls → Third-party app access, and filter for any app with Calendar or Meet scopes. In Microsoft 365, go to Entra admin center → Applications → Enterprise applications, and sort by permissions. Revoke access for any notetaker on your blocked list, then repeat the audit quarterly.
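If you export the grants from either console, the triage logic is simple to script. A sketch, assuming a list of dicts with `app` and `scopes` fields (illustrative field names, not either vendor's actual export schema) and a hypothetical approved list from Control #1:

```python
# Scope keywords that indicate meeting/calendar access worth flagging.
RISKY_SCOPE_KEYWORDS = ("calendar", "meetings", "contacts")

# Hypothetical approved list from your Control #1 policy.
APPROVED_APPS = {"Coommit"}

def grants_to_revoke(grants: list[dict]) -> list[str]:
    """Return app names holding calendar/meeting/contacts scopes
    that are not on the approved list."""
    flagged = []
    for grant in grants:
        scopes = " ".join(grant.get("scopes", [])).lower()
        risky = any(keyword in scopes for keyword in RISKY_SCOPE_KEYWORDS)
        if risky and grant["app"] not in APPROVED_APPS:
            flagged.append(grant["app"])
    return flagged
```

The output is your revocation worklist: everything it flags either gets revoked or gets promoted onto the approved list through a deliberate decision, never left in limbo.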

What to look for

Flag any grant with calendar, meeting, or contacts scopes that doesn't map to a tool on your approved list, and any grant that hasn't been used since before your policy went live: dormant tokens are pure risk with zero user benefit. A once-a-quarter OAuth sweep is one of the highest-leverage controls on this list. It takes one admin about 90 minutes and kills the persistent access that calendar filters alone can't touch.

8. Use a Bot-Free Platform for Sensitive Meetings by Default

The seven controls above harden your existing stack. Control #8 is architectural: default sensitive meetings to a platform that doesn't require external bots at all.

This is the thesis Granola, Read.AI's pivot, and several newer entrants are betting on. The AI lives natively inside the meeting product, with audio and canvas context, instead of a third-party bot sitting in the participant list siphoning a transcript out to someone else's cloud. If the AI is part of the platform — tied to your SSO, subject to your DLP, governed by your DPA — you don't need to block AI notetakers, because there aren't any external ones joining in the first place.

When to default to a native-AI platform

The short answer: any meeting where the transcript itself is sensitive, think legal, HR, customer data, or product strategy. This is where a platform like Coommit fits. Video, a real-time collaborative canvas, and an AI assistant that sees the canvas and the conversation — all in one workspace, with no external bot in the room. For deeper reading on the tradeoffs, our comparison of unified versus split-stack meeting collaboration walks through when native AI beats third-party integrations.

9. Communicate the Rule, Enforce the Consequence

The final control is the one most orgs skip, and it's why policies fail.

Write a one-paragraph internal announcement and send it to every employee. Post it in Slack. Put it in the new-hire onboarding. The message: starting this month, we block AI notetakers that haven't been explicitly approved, and we do it for consent, confidentiality, and trust reasons. Here are the three tools we approve. Here's how to request an exception. Here's what happens if you bring an unapproved one.

Then actually enforce it. The first time someone brings an unapproved notetaker to an external meeting, the host kicks it on the spot, the employee gets a quick note from IT, and the incident gets logged. The second time triggers a manager conversation. By the third, you've probably caught a pattern worth escalating.
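If you log incidents in a ticketing system, the escalation ladder is trivial to encode so the response is consistent rather than ad hoc. A sketch, with the step wording as a placeholder for your own policy language:

```python
# Escalation ladder from the policy above; step wording is illustrative.
ESCALATION_STEPS = {
    1: "host kicks the bot, IT sends a note, incident logged",
    2: "manager conversation",
    3: "escalate as a pattern for HR/security review",
}

def escalation_step(incident_count: int) -> str:
    """Map an employee's running incident count to the enforcement step.
    Counts above three stay at the final escalation step."""
    return ESCALATION_STEPS[min(max(incident_count, 1), 3)]
```

The point of encoding it is predictability: every employee hits the same ladder in the same order, which keeps enforcement defensible when someone pushes back.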

Enforcement is where most no-bot policies quietly die. Don't let yours be one of them.

What Changes Next

The two platform-level shifts from March and April 2026 — Google Meet's default-deny bot policy and Microsoft Teams' external notetaker detection — are just the beginning. Expect Zoom to follow within the next two quarters, and expect Meet and Teams to tighten further as native AI (Gemini's Workspace Studio, Teams Copilot) becomes the default transcription layer.

The implication for admins: the window to block AI notetakers cleanly is open right now. The controls above aren't permanent fixes — they're the staging layer while the industry sorts itself out. Twelve months from now, the question won't be "how do I block external bots" but "which native AI do we trust with our meeting data?" Get the policy foundation, platform defaults, and OAuth hygiene right today, and you'll be ready for whatever the next wave brings.

If you want to sidestep most of this complexity, move your sensitive meetings to a platform with native AI and a built-in canvas, and reserve the bot-heavy platforms for low-stakes external calls. That's the architecture Coommit is built around — and it's why teams are quietly migrating away from bot-stuffed meetings in the first place.