Where Is “Break the Glass” Now?


Healthcare AI, EHR access, and the new liability question

By Michele D. Alexander — MDA Solutions LLC


A vendor recently told me their product was compliant. Then, in the same conversation, they clarified that they didn’t really handle governance or Business Associate Agreements.

That was the moment I had to stop the conversation.

In healthcare, compliance and governance are not separate categories. If a tool touches protected health information, supports clinical decision-making, summarizes encounters, or sits anywhere inside a clinical workflow, then governance, access controls, auditability, and BAAs are not side issues. They are the foundation.

That vendor exchange points at something larger that healthcare leaders, whether clinical, administrative, or technical, need to take seriously right now: as artificial intelligence enters more clinical and administrative workflows, the old idea of "break the glass" access has expanded, and so has the question of who is responsible when something goes wrong.


What "break the glass" originally did

There was a time when "break the glass" in healthcare technology had a fairly clear meaning. A clinician or authorized staff member needed emergency access to information they normally could not see. The system presented a warning, required the user to acknowledge a reason for access, opened the record, and then logged the activity for later review.


In plain English: the glass could be broken, but someone needed to know who broke it, why, what they accessed, and whether the access was justified.
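The mechanics described above, warn, capture a reason, open the record, and log everything, can be sketched as a minimal audit record. This is an illustrative Python sketch under assumed names; `BreakGlassEvent`, `break_glass`, and every field are hypothetical and not any specific EHR vendor's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a break-glass audit trail; field names are
# illustrative, not any specific EHR vendor's schema.
audit_log: list["BreakGlassEvent"] = []

@dataclass
class BreakGlassEvent:
    user_id: str            # who broke the glass
    patient_id: str         # whose record was opened
    reason: str             # the justification the user acknowledged
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    reviewed: bool = False  # flipped to True once compliance reviews it

def break_glass(user_id: str, patient_id: str, reason: str) -> BreakGlassEvent:
    """Require a documented reason before opening the record, then log it."""
    if not reason.strip():
        raise ValueError("emergency access requires a documented reason")
    event = BreakGlassEvent(user_id, patient_id, reason)
    audit_log.append(event)  # traceable and reviewable after the fact
    return event
```

The design choice is the point: access without a stated reason is refused, and every grant leaves a record someone is expected to review later.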



The HIPAA Security Rule reflects this. Covered entities and business associates are required to establish procedures for obtaining necessary electronic protected health information during an emergency. The point was never to lock everything down so tightly that care suffered. The point was to give the right people emergency access when needed and make that access traceable, reviewable, and accountable.

Break-glass access was never a free pass. It was an exception pathway. Exceptions require oversight.


Where “break the glass” is now

The original concept assumed a human was on the other side of the glass. Today, healthcare organizations have more digital layers sitting between the patient, the clinician, the staff member, the payer, and the record. AI tools may summarize encounters, draft notes, flag risks, predict deterioration, suggest billing codes, assist with prior authorization, or surface information that previously required a human to retrieve.

That means the question is no longer just “Did the human access the chart?” Leaders now need to ask:


  • Did the AI tool access the record?

  • Did it summarize or alter the meaning of the information?

  • Did the clinician review the output?

  • Was the patient aware the tool was being used?

  • Did the organization audit what happened afterward?

  • Did a delay, denial, documentation error, or clinical harm occur as a result?


The glass is no longer just a chart. It is a chain of decisions.


AI changed the liability conversation

Ambient AI scribes are a useful example. These tools can reduce documentation burden and help with clinician burnout. They can also create new exposures. Reuters reported in January 2026 that ambient scribes are opening new legal and regulatory frontiers — particularly around consent, accuracy, privacy, and clinician oversight.


Consider a straightforward scenario. An AI-generated note states the patient denied chest pain. The patient actually reported it. Who is responsible?


The answer depends on facts, contracts, state law, the consent process, the documentation policy, and whether the clinician had a meaningful opportunity to review and correct the output.


Operationally, the safer leadership posture is straightforward: treat every AI touchpoint in the clinical workflow as a potential audit event. Document the consent process. Ensure clinicians have a meaningful opportunity to review and correct AI output. And when something goes wrong, be able to reconstruct exactly what happened and why.

Does governance slow innovation? It is a fair question. I was recently in a webinar on AI and healthcare research where someone raised it directly — suggesting that patient consent requirements were a barrier, and that AI could move faster without them. The question was whether medical necessity could justify accessing patient data for research without consent.


Whoa.


That question deserves a direct answer: governance is not the obstacle. Ungoverned access is.


The same frameworks that protect patients also protect organizations. When access is traceable, justified, and documented, it is defensible. When it is not, the liability follows.


Who needs to be compliant?

That is an interesting question. The answer: the billing department, the health tech team, clinicians, ancillary staff, volunteers, and third-party solution vendors.

When any of these groups cannot speak fluently about governance, BAAs, audit trails, data retention, or PHI access, the gap does not disappear. It transfers to the healthcare organization that bought the tool. Governance is what makes that gap visible and closes it.


The new questions healthcare AI must answer

In a traditional EHR environment, audit logs answered familiar questions: Who opened the record? When? What did they view? What was the diagnosis? What was the treatment plan? Is there a signed note? And when emergency access was used: Why did you break the glass? Was the access justified? Someone was responsible for following up on those answers.

AI introduces an additional layer. Organizations now need to be able to answer:

  • What data did the AI tool use?

  • Was the AI output saved into the medical record?

  • Did the clinician accept, edit, or reject the output, and document that the benefit outweighed the risk?

  • Was the source information traceable?

  • Was the recommendation explainable enough to support clinical judgment?

  • Was the patient informed when AI was used in the encounter?

  • Was the tool monitored after implementation?
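Those AI-layer questions map naturally onto an audit record of their own. The sketch below is hypothetical, an illustration of the fields an organization might capture per AI touchpoint, not a vendor schema or a regulatory requirement; `AIAuditEvent` and `needs_escalation` are assumed names.

```python
from dataclasses import dataclass

# Hypothetical sketch of the additional audit fields AI introduces;
# names are illustrative, not tied to any vendor or standard.
@dataclass
class AIAuditEvent:
    tool_name: str           # which AI tool ran
    data_used: list[str]     # what data the tool drew on
    output_in_record: bool   # was the output saved into the chart?
    clinician_action: str    # "accepted", "edited", or "rejected"
    patient_informed: bool   # was the patient told AI was used?
    source_traceable: bool   # can the output be traced to its inputs?

def needs_escalation(event: AIAuditEvent) -> bool:
    """Rough screen: untraceable, unreviewed, or undisclosed AI use
    should be escalated rather than filed away."""
    reviewed = event.clinician_action in {"accepted", "edited", "rejected"}
    return not (event.source_traceable and reviewed and event.patient_informed)
```

Even a screen this crude illustrates the leadership point: if these fields cannot be filled in after the fact, the process cannot be reconstructed, let alone defended.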


Recent research on auditable clinical AI emphasizes that well-designed audit logs support transparency, integrity, and evidentiary review. The point for healthcare leaders is simpler: if you cannot reconstruct what happened, you may struggle to defend the process.


Minimum necessary still matters

The HIPAA minimum necessary standard requires that PHI not be used or disclosed beyond what is necessary to accomplish the intended purpose. There are exceptions, including:


  • Treatment

  • Payment

  • Health care operations

  • Public health activities

  • Health oversight activities

  • Judicial and administrative proceedings

  • Law enforcement purposes

  • Research

  • Serious threat to health or safety

  • Specialized government functions

  • Workers’ compensation

  • Required by law

  • Limited data set disclosures


These exceptions are set out in 45 CFR, but the minimum necessary standard remains a key Privacy Rule protection.


If AI tools are now accessing, summarizing, extracting, or generating content from patient information, then “necessary access” must be re-evaluated through that lens. And that’s where governance comes in.


So who is responsible?

I am not offering legal advice. But from a compliance perspective, responsibility typically does not sit in one place. It tends to follow the audit and decision trail. AI has made remarkable strides in medical imaging, diagnostics, early disease prediction, and documentation. The opportunity is real. But without workflow integration and governance, the risk is equally real. So what should healthcare organizations do?


Think governance

Every healthcare organization using or considering AI should create an AI governance committee. Follow these simple steps:


1. Clarify mission and scope.

2. Decide structure and reporting line.

3. Pick the right mix of members.

4. Write a one-page charter.

5. Schedule a recurring meeting cadence and agenda.

6. Give the committee a starter work plan.


A modern healthcare AI access and accountability review asks:

  • Who or what can access PHI, and why?

  • Does the patient need to know or agree?

  • Where does the AI output enter the care process?

  • Who validates the output before it is used?

  • Can the organization reconstruct what happened?

  • How are mistakes identified and corrected?

  • Who owns policy, escalation, and monitoring?

  • What do contracts, policies, and clinical standards actually say when something goes wrong?


This is no longer optional “nice to have” work. It is operational risk management.


The leadership takeaway

Break-glass access was never just about technology. It was about judgment, necessity, documentation, and accountability, and AI has not changed that. Don't be reactive.


About the author

Michele D. Alexander is the founder of MDA Solutions LLC, a healthcare consulting practice focused on workflow, quality, compliance, and AI readiness for clinics, hospitals, healthtech firms, and behavioral health organizations. She has more than 20 years of healthcare operations and EHR experience and was recently featured in Benzinga discussing responsible AI adoption in healthcare. MDA Solutions offers HIPAA AI Readiness Assessments, EHR optimization, healthcare strategy advisory, quality and compliance work, and executive coaching.


Sources

U.S. Department of Health and Human Services. HIPAA Security Rule — Technical Safeguards (45 CFR § 164.312), including emergency access procedures.

U.S. Department of Health and Human Services. HIPAA Security Rule — Administrative, Physical, and Technical Safeguards for ePHI.

U.S. Department of Health and Human Services. Business Associates and BAA Requirements; Direct Liability of Business Associates Under HIPAA.

U.S. Department of Health and Human Services. Minimum Necessary Standard, HIPAA Privacy Rule.

Reuters, Health Care Ambient Scribes Offer Promise But Create New Legal Frontiers, January 23, 2026.

An auditable and source-verified framework for clinical AI, peer-reviewed publication, 2026 (PMC).
