Is that AI Tool Worthy?

By Brent Martin
3 min read
*The personal subscription you brought to work probably has a privacy policy you have never read*

If you took the class, you already know the move. Before you paste, classify. Green, yellow, red. Pause long enough to know what kind of data you are about to hand to a chatbot, and most of the bad outcomes never happen.

This post is a nudge about the part that does not get talked about enough: the tools sitting just outside the conversation about AI.

When people picture an "AI tool," they picture a chatbot. ChatGPT, Claude, Gemini, Copilot — something with a prompt box. Those are the tools your organization is most likely to have looked at, sanctioned, or at least had an opinion about. The bigger risk is usually the layer underneath: Grammarly catching everything you type in the browser, Otter sitting in your meetings, Kimi turning your notes into slides, Notion AI summarizing a page that happens to contain client information, Zoom's AI Companion quietly generating a transcript of a sensitive conversation. These tools are useful enough that people buy them out of pocket and bring them to work, and they often slip into the workflow without anyone formally deciding whether they should be there.

That is the gap worth paying attention to.

The personal subscription problem

Most of the tier-2 AI tools in your workday were not chosen by your IT team. They were chosen by you, because they solved a real problem and the free trial was good. That is not a character flaw — that is how productive people work. But useful does not mean safe, and a personal subscription does not come with the contractual protections that an enterprise agreement does.

A few patterns worth knowing about:

  • Always-on extensions. Grammarly's browser extension processes text in every field where you type — email drafts, chat messages, internal apps, HR systems. People forget it is running. The thing you installed to fix typos in personal email is also reading the client message you are composing right now.
  • Always-on listeners. Otter, Zoom AI Companion, Fireflies, and the rest of the meeting-bot category process every word spoken in a call. If a sensitive topic comes up, the transcript exists. It lives somewhere. Someone — possibly the vendor — can read it.
  • Always-on context. Tools like Notion AI and Kimi work on the page or document you point them at. The convenience comes from the fact that they can see the whole thing. The risk comes from the fact that they can see the whole thing.

None of these tools are villains. Used on green data, most of them are fine. But the same personal subscription that is great for drafting a wedding toast is the wrong place for the contract you are reviewing on Tuesday afternoon.

A 30-second check before you trust a tool with anything yellow

You do not need to read a whole privacy policy to make a decent call. You need a chatbot you already trust to do it for you. Open the tool's privacy policy or terms of service in one tab. Open a different chatbot in another. Paste the prompt below along with the policy text or a link.

This works because the question is not "is this tool good?" The question is "what does this specific tool, on this specific tier, actually commit to doing with my data?" That answer is buried in the terms, and a chatbot can pull it out in under a minute.

Two ground rules. Do not paste this into the tool you are evaluating — that is asking the salesperson if you should buy the product. And treat the output as a starting point, not a verdict. The chatbot is reading text. It cannot tell you whether the company honors what it wrote.

Copy this and keep it somewhere you can find it again:

I want to understand whether it is safe to enter sensitive or proprietary
information into this tool. Please assess:

1. Ownership
2. Use of submitted data
3. Confidentiality and IP protection
4. Retention and deletion
5. Security and compliance
6. Sharing and third parties
7. Risk assessment: Please rate the risk of using this tool for each category of data:
   * Public information
   * Generic marketing copy
   * Internal brainstorming
   * Confidential business strategy
   * Customer or employee personal data
   * Source code
   * Legal, financial, or medical information
   * Patentable inventions or trade secrets
   Use risk levels: Low / Medium / High / Very High.
8. Practical recommendation: Give me:
   * A plain-English bottom line
   * The biggest red flags
   * The strongest protections, if any
   * What types of data are safe to enter
   * What types of data should not be entered
   * Any questions I should ask the vendor before using it for business purposes

Please cite the exact sections or language you rely on, and clearly distinguish
between what the policy explicitly says, what it does not say, and what you
are inferring.

The habit, restated

Run the prompt on the tools you actually use — not just the big chatbots, but the writing assistant, the meeting recorder, the slide generator, the note-taking app, the browser extension you forgot was installed. Especially the personal subscriptions you brought to work. Those are the ones most likely to be sitting on yellow and red data without an organizational agreement underneath them.

Before you paste, classify. Before you trust a tool with anything past green, check the terms. The prompt above turns checking from a chore into a thirty-second habit. That is the whole point.

When in doubt, leave it out. That is not fear. That is good judgment — and you already have it.


Took the class? Bring a teammate to the next one, or bring it to your team as a private cohort.

About the Author

Brent Martin

Brent has 30 years of consulting experience, mostly with complex back-office systems for large companies.