What Is HIPAA Compliance for AI Tools in Healthcare?

By Michele D. Alexander | MDA Solutions LLC
The Direct Answer
HIPAA compliance for AI tools means that any artificial intelligence system touching protected health information (PHI) must meet the same legal and operational standards as any other covered technology — with additional governance requirements that most healthcare organizations are not yet prepared for.
If an AI tool accesses, processes, summarizes, generates, or transmits PHI, it is subject to the HIPAA Privacy Rule, the Security Rule, and the Breach Notification Rule. The vendor must sign a Business Associate Agreement (BAA). The organization must maintain audit logs. Access must be controlled, traceable, and limited to what is necessary for a legitimate purpose.
That is the floor. Most healthcare AI deployments are not even at the floor yet.
Why This Question Is Urgent in 2026
Healthcare AI adoption has accelerated faster than governance has caught up. According to the American Hospital Association, more than 80% of health systems are now using or piloting AI tools in some part of their operations. Clinical documentation, prior authorization, risk stratification, diagnostic imaging, billing — AI is touching all of it.
The problem is not the technology. The problem is that many organizations adopted these tools the same way they adopted EHRs in the early 2000s: quickly, under pressure, with compliance treated as something to handle later.
Later is now.
The HHS Office for Civil Rights (OCR) has signaled increasing attention to AI-related privacy and security gaps. State attorneys general are moving faster than federal regulators in some cases. And plaintiffs' attorneys are paying close attention to AI-generated documentation errors that make their way into medical records.
Healthcare leaders who are still treating AI compliance as an IT issue are operating with the wrong frame. This is an organizational risk issue, and it lands in the C-suite.
The HIPAA Rules That Apply to AI — Specifically
The Privacy Rule (45 CFR Part 164, Subpart E)
The Privacy Rule governs how PHI can be used and disclosed. For AI tools, the relevant questions are:
Minimum necessary standard: Is the AI tool accessing only the PHI required to perform its function? A billing AI that pulls full encounter notes when it only needs diagnosis codes is a compliance gap.
Patient authorization: Has the patient been informed when AI is being used in their care? Is that disclosure documented?
Permissible purposes: Is the AI using PHI for treatment, payment, or healthcare operations — or is it being used for model training, product improvement, or research without proper authorization?
This last point is where many vendor contracts create hidden exposure. Read the data use provisions carefully. If a vendor's contract allows them to use your patients' data to improve their model, that may require patient authorization that you have not obtained.
The Security Rule (45 CFR Part 164, Subpart C)
The Security Rule requires administrative, physical, and technical safeguards for electronic PHI. For AI tools, the specific requirements that most often create gaps are:
Access controls (§164.312(a)): Who or what can access the PHI the AI uses? Are access rights role-based and reviewed regularly?
Audit controls (§164.312(b)): Can the organization produce a log of what the AI accessed, when, and why? This is the requirement most AI implementations fail.
Integrity controls (§164.312(c)): Can the organization verify that AI-generated or AI-modified data has not been altered inappropriately?
Transmission security (§164.312(e)): Is PHI encrypted when it moves between your systems and the AI vendor's infrastructure?
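The audit-control question above ("who accessed what, when, and why") maps naturally onto a structured log record. A minimal sketch in Python: the field names and the service-account naming are illustrative assumptions, not anything the Security Rule prescribes, so adapt them to your own logging infrastructure.

```python
import json
from datetime import datetime, timezone

def audit_record(actor, tool, patient_id, fields, purpose):
    """Build one structured audit entry for AI access to PHI.

    Captures the who/what/when/why that audit controls under
    §164.312(b) are meant to let an organization reconstruct.
    Field names are illustrative, not prescribed by the rule.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,            # user or service account
        "tool": tool,              # the AI system making the access
        "patient_id": patient_id,  # internal identifier, never an SSN
        "phi_fields": fields,      # exactly which data elements were read
        "purpose": purpose,        # treatment, payment, or operations
    }

# Hypothetical example: an ambient-documentation service reading a note.
record = audit_record(
    actor="svc-ambient-scribe",
    tool="ambient-documentation",
    patient_id="MRN-104822",
    fields=["encounter_note"],
    purpose="treatment",
)
print(json.dumps(record, indent=2))
```

If your AI tools cannot emit something equivalent to this record for every PHI access, that is the Security Rule gap described above.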
Business Associate Agreements
Every AI vendor that handles PHI on your behalf must sign a BAA. This is non-negotiable under 45 CFR §164.308(b). The BAA must specify how the vendor uses PHI, what security standards they maintain, how they notify you of a breach, and what happens to PHI when the relationship ends.
What most healthcare leaders do not know is that a signed BAA does not mean a vendor is compliant. It means they have accepted contractual responsibility. You still need to verify. A vendor who tells you they are "HIPAA compliant" without being able to show you their security documentation, audit log capabilities, and breach notification procedures is making a marketing claim, not a compliance statement.
EHR "Certification" vs. HIPAA Compliance
This is one of the most persistent misconceptions I encounter in the field. EHR vendors frequently advertise that their systems are "certified," and healthcare leaders assume that means HIPAA-approved. It does not.
EHR certification typically refers to ONC Health IT Certification, which focuses on three things: interoperability and standards-based data exchange; specific functionality such as e-prescribing, clinical quality measure reporting, and APIs; and compliance with 21st Century Cures Act information-blocking requirements.
ONC certification says nothing about HIPAA. A covered entity using an ONC-certified EHR still must perform a HIPAA Security Risk Analysis on the full implementation; configure access controls, audit logs, encryption, and backup systems; and maintain policies, procedures, and workforce training around how the EHR is used.
An ONC-certified EHR can be used in a HIPAA-compliant manner — but the government has not certified it as HIPAA compliant. That distinction matters enormously when AI tools are layered on top of or integrated with your EHR. The EHR's certification does not extend to the AI features. Each integration must be evaluated on its own.
The 5 Most Common HIPAA AI Compliance Gaps
After 20+ years in healthcare operations and compliance, these are the gaps I see most consistently:
1. No BAA, or a BAA that does not actually cover the AI use case. The contract may have been signed before the vendor added AI features. The original BAA may not cover model training or output storage.
2. PHI in prompts. Staff are using general-purpose AI tools — ChatGPT, Gemini, Microsoft Copilot without enterprise licensing — and entering patient information into prompts. This is a HIPAA violation. It happens constantly, and most organizations do not know it is occurring.
3. AI output saved directly into the medical record without clinician review. Ambient documentation tools, AI-assisted coding tools, and clinical decision support systems generate output that sometimes goes directly into the EHR. When errors enter the record without review, they become part of the legal health record. The liability follows.
4. No audit trail for AI-accessed data. When OCR investigates a breach or a complaint, they ask: who accessed what, when, and why? If the organization cannot answer that question for its AI tools, it cannot demonstrate compliance.
5. Vendor due diligence that stops at the contract. Signing a BAA and checking a compliance certification box is not due diligence. It is paperwork. Real due diligence includes reviewing the vendor's security architecture, their subprocessor list (who else touches your PHI), their breach history, and how they handle PHI after contract termination.
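The PHI-in-prompts gap can be partially mitigated in software. A minimal sketch, assuming Python: the regex patterns are illustrative, and pattern matching catches only formatted identifiers (SSNs, MRNs, phone numbers). It cannot detect names, dates, or free-text clinical detail, so a screen like this supplements policy and training rather than replacing them.

```python
import re

# Naive patterns for a few identifiers with a recognizable format.
# Illustrative only: a real deployment would sit in a proxy or gateway
# in front of the AI tool and block or quarantine flagged prompts.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[-:]?\s*\d{5,}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def screen_prompt(text):
    """Return the identifier types detected in a prompt, if any."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]

hits = screen_prompt("Summarize the note for MRN: 104822, SSN 123-45-6789")
print(hits)  # ['ssn', 'mrn']
```

An empty result proves nothing; a non-empty result is a prompt that should never reach a consumer AI tool.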
What Healthcare Organizations Need to Do Now
This is not a list of aspirational best practices. These are the minimum steps for an organization that is using AI tools and wants to be defensible in an audit or investigation.
Step 1: Inventory your AI tools. You cannot govern what you cannot see. Conduct an AI inventory — including tools adopted at the departmental level without centralized IT review. Include AI features embedded in existing software (many EHRs now have AI documentation features that activated automatically with an update).
Step 2: Review every BAA. Pull the BAAs for every vendor on your AI inventory. Verify that the BAA covers the AI use case, that it addresses PHI in model training, and that it includes breach notification timelines that meet your obligations.
Step 3: Assess your audit log capability. For each AI tool, determine whether you can produce a log of PHI access. If you cannot, that is a Security Rule gap.
Step 4: Establish a PHI-in-prompts policy. Create a clear, written policy on what staff may and may not enter into AI tools. Train on it. Monitor compliance. This is one of the highest-risk gaps and one of the easiest to address with policy and training.
Step 5: Build AI governance into your compliance program. Assign accountability. Create an AI governance committee or expand your existing compliance committee's charter to include AI. Establish a review process for new AI tools before adoption.
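Steps 1 through 3 amount to a per-tool checklist, which can be kept as structured data rather than a spreadsheet that drifts out of date. A minimal sketch of one inventory record, assuming Python; the fields and gap rules are illustrative assumptions drawn from the steps above, not a regulatory schema.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row in an AI inventory. Fields mirror Steps 1-3 above;
    extend them to match your own governance checklist."""
    name: str
    vendor: str
    touches_phi: bool
    baa_signed: bool
    baa_covers_model_training: bool
    audit_log_available: bool

    def gaps(self):
        """List the compliance gaps this record reveals."""
        issues = []
        if self.touches_phi and not self.baa_signed:
            issues.append("no BAA")
        if self.touches_phi and not self.baa_covers_model_training:
            issues.append("BAA silent on model training")
        if self.touches_phi and not self.audit_log_available:
            issues.append("no audit log (Security Rule gap)")
        return issues

# Hypothetical example: a documentation tool with a pre-AI-era BAA.
scribe = AIToolRecord(
    name="ambient-scribe", vendor="ExampleVendor",
    touches_phi=True, baa_signed=True,
    baa_covers_model_training=False, audit_log_available=False,
)
print(scribe.gaps())
```

Running `gaps()` across the whole inventory gives the prioritized remediation list that Step 5's governance committee would work from.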
How to Know If Your Organization Is Actually Compliant
Ask your team these questions. The answers will tell you where you stand.
Can you name every AI tool that touches PHI in your organization?
Does every one of those vendors have a signed, current BAA?
Does your BAA cover PHI use for model training and improvement?
Can you produce an audit log for AI-accessed PHI?
Do you have a written policy on AI tool use by staff?
Have clinicians been trained to review and correct AI-generated documentation before it enters the record?
Does your organization have a process for AI-specific incident response?
If the answer to any of those questions is "I'm not sure," that is your starting point.
A Word on Vendor Claims
I have sat across from vendors who told me their product was HIPAA compliant and then, in the same conversation, clarified that they did not handle BAAs or governance. I have reviewed contracts where the data use provisions allowed the vendor to use PHI for product improvement without explicit authorization.
"HIPAA compliant" is not a certification. There is no federal body that certifies AI tools as HIPAA-compliant. The claim tells you nothing about the vendor's actual practices. What tells you something is their BAA, their security documentation, their audit capabilities, and their breach history.
Ask for all of it. If they cannot provide it, that is your answer.
The Bottom Line
HIPAA compliance for AI tools is not a technology problem. It is a governance problem. The organizations that are managing it well have done three things: they know what tools they are using, they have verified that every vendor relationship is properly documented and governed, and they have built internal policies that address the specific risks AI introduces — prompt input, output review, audit trails, and patient disclosure.
The organizations that are struggling have treated AI adoption as a procurement decision and compliance as a checkbox. In 2026, that approach is no longer survivable.
How MDA Solutions Can Help
MDA Solutions LLC offers HIPAA AI compliance assessments for healthcare organizations at every stage of AI adoption — from initial inventory to full readiness assessment and remediation roadmap.
HIPAA AI Quick Scan — from $1,500: A 60-minute review of your current AI tool landscape, a risk scorecard covering your top compliance gaps, and a readout session with prioritized recommendations.
HIPAA AI Readiness Assessment — from $4,500: A full diagnostic covering BAA review, audit log capability, PHI-in-prompts exposure, vendor due diligence gaps, and a written report with a remediation roadmap.
Implementation Sprint — Custom: Hands-on support to close the gaps identified in your assessment.
Frequently Asked Questions
Does HIPAA apply to AI tools that only process de-identified data? Properly de-identified data under 45 CFR §164.514 is not subject to the HIPAA Privacy Rule — but "de-identified" has a precise legal definition that most organizations do not fully understand.
Under HIPAA, data only qualifies as de-identified if it no longer points to a specific individual and there is no reasonable basis to believe the information could be used to identify that person. There are two recognized methods for achieving this. The first is Safe Harbor, which requires removal of all 18 categories of identifiers specified in the rule — names, dates more specific than the year, geographic subdivisions smaller than a state, phone numbers, email addresses, Social Security numbers, and more. The second is Expert Determination, which requires a qualified statistician or data scientist to formally analyze the data and document that the risk of re-identification is very small.
Removing a patient's name is not de-identification. Replacing a name with an ID number is not de-identification. If your dataset has not been evaluated against one of these two standards, treat it as PHI. If you are not certain, that uncertainty is your answer — treat it as PHI until a formal assessment says otherwise.
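Two of the Safe Harbor requirements are mechanical enough to sketch: date elements more specific than the year must be removed, and ages 90 and over must be aggregated into a single category. A minimal illustration in Python; this shows two individual transformations, not a de-identification tool, and it says nothing about the other identifier categories.

```python
from datetime import date

def safe_harbor_date(d):
    """Reduce a date to its year, per Safe Harbor's requirement that
    all date elements more specific than the year be removed."""
    return str(d.year)

def safe_harbor_age(age):
    """Aggregate ages 90 and over into the single '90+' category
    Safe Harbor requires; younger ages may be kept as-is."""
    return "90+" if age >= 90 else str(age)

print(safe_harbor_date(date(2026, 3, 14)))  # "2026"
print(safe_harbor_age(93))                  # "90+"
print(safe_harbor_age(47))                  # "47"
```

Applying transformations like these to two fields does not make a dataset de-identified; only removal of all 18 categories (or a documented Expert Determination) does.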
Does a BAA make an AI vendor HIPAA compliant? No. A BAA transfers contractual responsibility and creates legal accountability — but it does not mean the vendor's practices are compliant. You remain responsible for vendor oversight. A BAA is necessary but not sufficient.
Can we use ChatGPT or general AI tools with patient data? Not without enterprise licensing and a signed BAA. Consumer versions of AI tools — including ChatGPT, Google Gemini, and others — are not HIPAA-compliant environments. Using them with PHI is a violation regardless of how the prompt is worded or whether names are removed.
What happens if an AI tool causes a HIPAA breach? The covered entity — your organization — is responsible for notification under the Breach Notification Rule (45 CFR §§164.400-414). Depending on the number of individuals affected, you may be required to notify affected individuals, HHS, and potentially the media. If the breach resulted from vendor failure, you may have contractual remedies under your BAA, but the notification obligation is yours.
How often should we reassess our AI compliance posture? At minimum, annually — and any time you adopt a new AI tool, a vendor updates their platform in ways that change how PHI is processed, or new regulatory guidance is issued. AI is moving fast enough that a once-every-three-years review cycle is not adequate.
Michele D. Alexander is the founder of MDA Solutions LLC, a healthcare consulting practice specializing in compliance, EHR optimization, quality improvement, and AI readiness for clinics, hospitals, and behavioral health organizations. She has more than 20 years of healthcare operations experience and was featured in Benzinga discussing responsible AI adoption in healthcare.
MDA Solutions LLC | mdasolutionsllc.com | New York, NY



