HIPAA Guide · Updated February 2026

Is ChatGPT HIPAA Compliant? The 2026 Answer for Therapists

No. Not on the plans most therapists use. Here is the exact compliance picture, what OpenAI actually says, and what HIPAA requires before you type a single word of session content into any AI tool.

12 min read · Built by a therapist

Quick Answer

No. ChatGPT is not HIPAA compliant. OpenAI does not offer a Business Associate Agreement (BAA) for ChatGPT Free or Plus, which means using either plan with protected health information violates HIPAA. OpenAI does offer a BAA for ChatGPT Enterprise and the API, but not for the standard plans most therapists use.

The short answer, by plan

The compliance picture depends entirely on which ChatGPT product you are using. These are different products with different terms, and most therapists are on the ones that offer no HIPAA protection at all.

ChatGPT Free

Not HIPAA compliant

No BAA available. OpenAI may use conversations to improve models. Entering client PHI here is a HIPAA violation.

ChatGPT Plus ($20/mo)

Not HIPAA compliant

No BAA available. Conversations are still subject to OpenAI's standard privacy terms. Same HIPAA exposure as the free plan.

ChatGPT Enterprise

BAA available (with conditions)

BAA is available, but requires volume pricing (approximately $30+/user/month), IT setup, legal review, and organizational controls. Most solo practitioners cannot access this tier.

OpenAI API

BAA available for developers

BAA is available for developers building on the API. If you are using a third-party therapy tool built on the OpenAI API, compliance depends on whether that tool has a BAA in place with both OpenAI and you.

The practical reality: The overwhelming majority of therapists who use ChatGPT are on the Free or Plus plans. Neither plan offers a BAA. This means the tool most therapists think of when they say "ChatGPT" is not an option for any client-related work.

What HIPAA actually requires

HIPAA (the Health Insurance Portability and Accountability Act) requires that covered entities, a category that includes therapists in private practice, share protected health information only with vendors who have signed a Business Associate Agreement.

A BAA is a legal contract. It requires the vendor to protect the PHI you share with them, report any breaches, not use the PHI for their own purposes (like training AI models), and delete data on your request. Without a signed BAA, you have no legal framework protecting the client information you share.

The BAA requirement applies to any tool that handles PHI on your behalf, regardless of how peripheral the use feels. "Just drafting notes" still counts. "Just generating worksheet prompts" still counts. If you type in session content that relates to a client and could identify them, you are handling PHI.

What counts as PHI

PHI is any information that relates to a patient's health and could be used to identify them. You do not need to include a name. Any of the following, in combination with health information, can constitute PHI:

Client names
Dates of service or specific appointment times
Geographic information smaller than state level
Ages (for individuals over 89)
Unique identifying numbers or codes
Presenting problems with distinctive detail
Specific life circumstances that identify the client
Session content tied to a real clinical encounter

Therapists sometimes ask: "What if I just remove the name?" This is a reasonable instinct, but it does not solve the HIPAA problem. A note about "a 34-year-old trauma survivor in the middle of a custody battle who is a nurse at Toronto General Hospital" identifies the client without using their name. HIPAA regulators look at the totality of the information, not just whether a proper name appears.

The safe assumption: any content drawn from a real client encounter is PHI until proven otherwise.
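To make the "just remove the name" failure concrete, here is a hypothetical sketch. The note text, the name list, and the quasi-identifier list are all invented for illustration; this is not a real de-identification tool, and real de-identification requires expert determination or full Safe Harbor removal of all 18 identifier categories.

```python
import re

# Fictional note, invented for this sketch. Echoes the example above:
# no name is needed for the combination of details to identify someone.
note = (
    "Maria, a 34-year-old nurse at Toronto General Hospital, is a trauma "
    "survivor in the middle of a custody battle."
)

# Naive "de-identification": strip known client names and nothing else.
client_names = ["Maria"]
redacted = note
for name in client_names:
    redacted = re.sub(rf"\b{re.escape(name)}\b", "[REDACTED]", redacted)

# The quasi-identifiers survive redaction. Together, age + occupation +
# employer + a distinctive life circumstance can still single out one person,
# so the redacted text is still PHI.
quasi_identifiers = ["34-year-old", "nurse", "Toronto General Hospital",
                     "custody battle"]
remaining = [q for q in quasi_identifiers if q in redacted]

print(redacted)
print(f"Identifying details still present: {len(remaining)} of {len(quasi_identifiers)}")
# Identifying details still present: 4 of 4
```

The point of the sketch is negative: name removal changes almost nothing about identifiability, which is why regulators look at the totality of the information.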

What OpenAI actually says

OpenAI is not vague about this. Their documentation is clear, if you read it carefully.

OpenAI's standard Terms of Service state that ChatGPT is not designed for use cases requiring HIPAA compliance. Their usage policies explicitly prohibit using ChatGPT for certain sensitive healthcare applications without appropriate enterprise arrangements in place.

OpenAI's default privacy settings for Free and Plus accounts allow them to use conversations to improve their models. This means client information you enter may be used as training data. Even if OpenAI anonymizes that data before using it in training, the act of transmitting PHI to a system without a BAA is already a HIPAA violation at the point of transmission, before any downstream use occurs.

OpenAI's own language on Enterprise vs. consumer plans

Consumer plans (Free, Plus)

No BAA. Conversations may be used for model training unless you opt out (and opting out is not a substitute for a BAA). Not designed for regulated healthcare data.

Enterprise plan

BAA available. OpenAI states that Enterprise conversations are not used for model training by default. Includes additional administrative controls. Requires volume commitment and IT configuration.

API (developers)

BAA available for API customers. API inputs are not used for training by default. Relevant for developers building healthcare apps on OpenAI infrastructure, not for therapists using ChatGPT directly.

OpenAI's enterprise privacy documentation (enterprise.openai.com) is the authoritative source for current terms. These policies do change over time. Verify directly before making any compliance decisions, and have legal counsel review before relying on an Enterprise BAA.

The BAA problem, explained plainly

A Business Associate Agreement is not a technicality. It is a signed legal contract between you (the covered entity) and any vendor who handles PHI on your behalf (the business associate). Without it, sharing PHI with a vendor is a HIPAA violation regardless of:

Whether any data breach ever occurs
Whether your intentions were good
Whether the vendor claims to "take security seriously"
Whether the vendor has received a SOC 2 certification
Whether you removed the client's name from the content

The BAA requirement exists to create accountability. When a vendor signs a BAA, they agree to specific obligations: they will protect the PHI, they will not use it for their own purposes, they will report breaches to you within specific timeframes, and they will delete the data when the relationship ends. These are legally enforceable commitments, not marketing claims.

The "nobody has been caught" logic does not hold. HIPAA compliance is not about whether you will be audited. It is about what liability you are carrying. If a client's data is later part of a breach at OpenAI or any vendor, and you had entered that data without a BAA, you are personally liable under HIPAA. The breach does not need to be your fault for the liability to attach.

A useful analogy: driving without car insurance is technically possible and most drivers will never get into a serious accident. But the risk structure is fundamentally different from driving insured. The compliance question for HIPAA is not "what is the probability I get caught?" It is "what am I liable for if something goes wrong?"

For most therapists, the answer to that second question, combined with the simplicity of choosing HIPAA-compliant alternatives, makes continued ChatGPT use with client data a liability they do not need to carry.

What about ChatGPT Enterprise? Is it actually HIPAA compliant?

ChatGPT Enterprise does offer BAA capability. This is a meaningful distinction from the consumer plans, and it is worth understanding clearly.

With Enterprise, OpenAI commits to: not using your conversations to train models, providing data processing agreements, supporting HIPAA compliance frameworks, and offering additional administrative controls like SSO, audit logs, and custom retention settings. A signed BAA with OpenAI Enterprise is a legally enforceable HIPAA compliance instrument.

The practical barriers for solo practitioners

Cost

ChatGPT Enterprise is priced for organizations, not individual practitioners. Pricing is typically negotiated with volume commitments. Published figures start around $30 per user per month, but actual pricing requires direct sales engagement. For a solo practice, this puts the monthly cost at or above many purpose-built HIPAA-compliant therapy tools.

Setup requirements

Enterprise requires IT configuration: SSO setup, admin console management, domain verification, and policy configuration. Most solo practitioners do not have IT staff and are not equipped to manage enterprise software procurement and configuration on their own.

Legal review required

A BAA is only as protective as the terms within it. Enterprise BAAs are complex documents. Having legal counsel review the agreement before signing is prudent and adds to the cost and timeline of adoption.

Data still leaves your device

Even with a BAA, ChatGPT Enterprise sends your prompts to OpenAI's servers for processing. This is fundamentally different from zero-retention architecture, where data never persists. "HIPAA compliant by contract" and "HIPAA compliant by architecture" are two different compliance postures with different risk profiles.

Compliance by contract vs. compliance by architecture

There are two fundamentally different models for HIPAA-compliant AI tools. Understanding which category a tool falls into changes the risk profile significantly.

Compliant by contract (BAA model)

Your data goes to the vendor's servers. The vendor has promised (in writing) to protect it. You are protected by the legal agreement if something goes wrong. Examples: ChatGPT Enterprise, Quill, Mentalyc.

Compliant by architecture (zero-retention model)

Your data is processed in-memory and never stored. There is no retained data to breach. The protection is built into the system design, not into a legal contract. Example: Reframe Practice.

For most therapists, especially those in solo or small group practice, purpose-built HIPAA-compliant therapy tools offer a better risk profile at a lower cost and complexity than configuring ChatGPT Enterprise for healthcare use.

The audit risk: what enforcement actually looks like

HIPAA enforcement is handled by the HHS Office for Civil Rights (OCR). OCR investigates potential violations through two primary channels: complaints filed by individuals (including clients) and breach reports submitted by covered entities when an incident occurs.

Random audits do exist, but most enforcement actions begin with a breach or a complaint. This is the key insight: you are not safe just because no audit is scheduled. You are exposed at the moment something goes wrong, and at that point, the question is whether you had the required safeguards in place.

HIPAA civil penalty tiers (2024 OCR schedule)

Did not know (and could not have known): $100 to $50,000 per violation; annual cap $25,000 per category.
Reasonable cause (not willful neglect): $1,000 to $50,000 per violation; annual cap $100,000 per category.
Willful neglect (corrected within 30 days): $10,000 to $50,000 per violation; annual cap $250,000 per category.
Willful neglect (not corrected): $50,000 per violation; annual cap $1.9M per category.

Using a tool without a BAA because you did not know it was required would likely fall under "reasonable cause" or "willful neglect," depending on how long the practice continued. Ignorance of the BAA requirement is not a defense for a licensed mental health provider, who is a covered entity subject to HIPAA.
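As a back-of-envelope illustration of how the tiers compound, consider an invented scenario: a therapist pastes 80 client notes into a non-BAA tool over one year, and each note is counted as one violation at the low end of the "reasonable cause" tier. The scenario and the violation count are assumptions; the dollar figures come from the tier schedule above.

```python
# Hypothetical exposure estimate using the civil penalty tiers above.
# Scenario (invented): 80 notes entered without a BAA in one year, each
# treated as one violation at the reasonable-cause tier.
violations = 80
min_penalty_per_violation = 1_000   # low end of the reasonable-cause tier
annual_cap = 100_000                # annual cap per violation category

raw_exposure = violations * min_penalty_per_violation
capped_exposure = min(raw_exposure, annual_cap)

print(f"Raw exposure:     ${raw_exposure:,}")      # $80,000
print(f"After annual cap: ${capped_exposure:,}")   # $80,000 (under the cap)
```

Even at the minimum of the middle tier, routine use adds up quickly; at the willful-neglect tiers the same 80 violations would hit the annual cap almost immediately.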

The APA data point: A 2025 APA survey found that 67% of psychologists cite data breaches as their number one concern about AI tools in clinical practice. The concern is well-founded. The gap between "my AI tool seems fine" and "my AI tool has a BAA and I understand what that means" is where most HIPAA exposure lives.

Criminal penalties are a separate category and reserved for intentional violations. For the typical therapist who used ChatGPT in good faith without understanding the BAA requirement, civil penalties are the relevant risk, and they are substantial enough to take seriously.

What therapists use instead

The good news: HIPAA-compliant AI options exist for every major clinical workflow. The question is which compliance model you prefer.

For progress notes, the strongest options are tools purpose-built for therapy documentation. For worksheet and material generation, the full comparison of ChatGPT alternatives for therapists covers the major players. The summary below focuses on the compliance picture.

Tool | BAA available | Data storage | Solo-friendly | Notes / Worksheets
ChatGPT Free/Plus | No | Retained (may train models) | Yes (but not compliant) | General AI
ChatGPT Enterprise | Yes | Retained (not for training) | Not practical | General AI
Quill | Yes | Retained with BAA | Yes | Notes only
Mentalyc | Yes | Retained with BAA | Yes | Notes only
Reframe Practice | Yes (via Google Vertex AI) | Zero retention (in-memory) | Yes | Notes + Worksheets

The distinction between "retained with BAA" and "zero retention" matters in practice. When a vendor retains your session data, even under a BAA, that data exists on their servers. It is protected by contract, but a breach of their systems could still expose it. Zero-retention architecture removes the data from the equation entirely. There is nothing to breach because nothing is stored.

Reframe Practice: HIPAA compliant by architecture

Reframe Practice generates worksheets, progress notes, session prep materials, and treatment plan drafts. All processing happens in-memory using Google Vertex AI (with a signed BAA). Nothing is stored on Reframe's servers after your request completes. If you want to verify this yourself, open your browser's network inspector while generating a worksheet. Watch what gets sent and what comes back.


Not an endorsement that any particular tool is right for your practice. HIPAA compliance is a floor, not a ceiling. Review BAA terms directly, consult your licensing body's guidance on AI tools, and make decisions appropriate to your jurisdiction and client population.


Frequently asked questions

Is ChatGPT HIPAA compliant in 2026?

No. ChatGPT (Free and Plus plans) is not HIPAA compliant. OpenAI does not offer a Business Associate Agreement (BAA) for standard ChatGPT plans. Using ChatGPT with protected health information on these plans violates HIPAA. ChatGPT Enterprise does offer BAA capability, but it requires volume pricing and IT/legal setup that most solo practitioners cannot access.

Can I use ChatGPT for therapy notes without violating HIPAA?

Not on the standard Free or Plus plans. Any session content you type into ChatGPT that could identify a patient, including presenting problems, symptoms, or clinical details, counts as protected health information (PHI). Entering PHI into a system without a BAA is a HIPAA violation regardless of intent or whether a breach occurs.

Does OpenAI offer a BAA for ChatGPT?

OpenAI offers a BAA for ChatGPT Enterprise and for developers using the OpenAI API. Standard ChatGPT plans (Free and Plus) do not include BAA capability. This is clearly stated in OpenAI's enterprise privacy documentation. Most therapists use the Free or Plus plan, which means most therapists using ChatGPT with client data are out of compliance.

Is ChatGPT Enterprise HIPAA compliant?

ChatGPT Enterprise does offer Business Associate Agreement capability, which is the foundational requirement for HIPAA compliance. However, HIPAA compliance also depends on how your organization configures data controls, access permissions, and breach response. Enterprise starts at approximately $30 per user per month with volume minimums, making it inaccessible to most solo practitioners.

What happens if I use ChatGPT with client data?

Using ChatGPT (Free or Plus) with protected health information creates personal HIPAA liability. HIPAA penalties range from $100 to $50,000 per violation, with annual caps up to $1.9 million per violation category. Most violations surface during breach investigations rather than random audits. If a data incident later involved OpenAI's systems and you had entered client data without a BAA, you would have no legal protection.

Can I use ChatGPT if I don't include the client's name?

No, this is a common misconception. HIPAA defines protected health information as any data that could identify a patient in combination with health information. You do not need to include a name for information to qualify as PHI. Unique clinical details, a specific combination of presenting problems, a distinctive life situation, or session-specific content can all be PHI even without a name attached.

Is the ChatGPT API HIPAA compliant?

OpenAI does offer a Business Associate Agreement for API customers. Developers building HIPAA-compliant applications on top of the OpenAI API can enter into a BAA. However, the BAA is with the developer, not the end user. If you are using a third-party tool built on the ChatGPT API, HIPAA compliance depends on whether that third-party tool has its own BAA with OpenAI and whether it offers a BAA to you as the covered entity.

What AI tools for therapists are actually HIPAA compliant?

Tools with HIPAA compliance fall into two categories. BAA-based compliance: the vendor signs a Business Associate Agreement with you and promises to protect your data. This includes tools like Quill, Mentalyc, and ChatGPT Enterprise. Architecture-based compliance: data never leaves your device or is processed in-memory and immediately deleted. Reframe Practice uses zero-retention architecture with a Google Vertex AI Business Associate Agreement. Data is processed and immediately discarded.

Is Google Gemini HIPAA compliant for therapists?

Standard Google Gemini (the consumer product at gemini.google.com) is not HIPAA compliant for therapist use. Google Workspace with Healthcare add-on does support BAA capability, and Google Cloud services (Vertex AI) can be used under a BAA. The same logic applies to Gemini as to ChatGPT: the consumer-facing product lacks the BAA infrastructure required for HIPAA compliance.

How do I know if an AI tool is HIPAA compliant?

Ask three questions before using any AI tool with client data. First: does the vendor offer a signed Business Associate Agreement? If they say 'we take security seriously' without offering a BAA, that is not HIPAA compliance. Second: does the tool retain your data after processing? A BAA helps, but a vendor who retains all your session notes creates ongoing risk even with legal protections in place. Third: who is the underlying AI provider and do they have a BAA?
