Will AI Save Black Mothers — Or Ignore Them? Embedding Bias Isn't a Tech Problem. It's a Crisis.
- Apr 29

Published by Michele D. Alexander | MDA Solutions LLC
Let me start with the numbers.
In the United States:
41 Black women die per 100,000 births
12 White women die per 100,000 births
That is more than three times the risk. And this gap exists regardless of education, income, or access to care.
I am a Black woman. I almost died in childbirth.
This is not a policy position for me. This is personal.
I cannot have any more children because of what happened. But I have a daughter. And I am a healthcare consultant who has spent over 20 years working at the intersection of clinical operations, quality improvement, and technology. So when the healthcare industry began embedding AI into EHR systems, I had one question that I could not stop asking:
Are we using this technology to find and close these gaps — or are we scaling them?
The Data Already Exists. The Question Is What We Do With It.
"Black or African American" is a data point in every EHR system. It is already there. And it matters — because it can trigger Clinical Decision Support System (CDSS) alerts that give clinicians the opportunity to identify risk earlier, document it, and act on it.
That is a starting point. Not a solution — but a starting point.
The problem is that data only creates change when the systems built around it are designed to respond. Right now, across too many healthcare organizations, that alert can be generated and ignored. The risk can be documented and dismissed. The data exists, but the workflow around it does not consistently translate that data into safer outcomes.
That is an operations problem. It is a quality problem. And it is a problem that AI, if implemented without intention, will make worse — not better.
This Is Bigger Than OB
Black maternal mortality is the most visible data point, but the pattern it reflects runs through every area of healthcare:
How risk is identified. Are clinical decision support tools trained on data that reflects the full patient population — or are they calibrated to a narrower norm that leaves certain patients systematically under-scored for risk?
How patients are heard. Pain reporting, symptom documentation, and care escalation decisions all carry the potential for bias. AI tools trained on existing clinical data inherit the biases embedded in that data.
How decisions are made. When an algorithm flags risk, who acts on it? When it doesn't flag risk that a clinician suspects, who overrides it? These are workflow and governance questions that no technology vendor can answer for your organization.
What I'm Building
The NIH Stroke Scale (NIHSS) is one of the most practical tools in real-world clinical medicine. It is a standardized, structured approach to identifying stroke severity that gives clinicians a common language and a documented basis for intervention.
I am developing an application that asks a direct question: can that kind of structured, risk-detection framework be built for labor and delivery?
The application is focused on three things: quality, workflow, and risk detection. The goal is to give clinicians a structured tool that surfaces risk earlier, documents it consistently, and creates a record that supports both better care decisions and organizational accountability.
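The NIHSS analogy above can be sketched in code. This is a minimal, hedged illustration of the shape of such a tool: every item name, weight, and cutoff below is an invented placeholder, not a validated clinical criterion, and the point is only that a structured instrument scores named findings the same way every time and produces a documented result.

```python
# Hypothetical sketch of an NIHSS-style structured risk score for
# labor and delivery. Items, weights, and thresholds are ILLUSTRATIVE
# PLACEHOLDERS, not clinical guidance.

from dataclasses import dataclass

# Each item mirrors the NIHSS pattern: a named finding with a fixed
# weight, so every assessment is scored and documented identically.
RISK_ITEMS = {
    "systolic_bp_over_140": 2,       # placeholder weight
    "reported_severe_pain": 2,       # placeholder weight
    "postpartum_bleeding": 3,        # placeholder weight
    "prior_complication_history": 1, # placeholder weight
}

@dataclass
class Assessment:
    findings: dict  # finding name -> bool (present / absent)

    def score(self) -> int:
        """Sum the weights of all documented findings."""
        return sum(
            weight for name, weight in RISK_ITEMS.items()
            if self.findings.get(name, False)
        )

    def tier(self) -> str:
        """Map the total score to an escalation tier (placeholder cutoffs)."""
        s = self.score()
        if s >= 5:
            return "escalate-now"
        if s >= 2:
            return "increased-monitoring"
        return "routine"

a = Assessment(findings={"systolic_bp_over_140": True,
                         "reported_severe_pain": True})
print(a.score(), a.tier())  # 4 increased-monitoring
```

The design choice this sketch illustrates is the same one that makes the NIHSS useful: the score is reproducible and leaves a record, so a dismissed risk is a documented dismissal, not a silent one.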
Patients should not have to rely on luck to survive childbirth in 2026. That is not a standard any healthcare organization should be comfortable accepting.
What This Means for Healthcare Leaders
If your organization is implementing or expanding AI in clinical workflows, these are the questions that need to be on your agenda:
How was the model trained, and whose data is reflected in it?
What populations are systematically underrepresented in your outcomes data — and how does that affect what the AI learns?
What governance structure ensures that CDSS alerts are acted on consistently?
Who in your organization is accountable for monitoring AI-driven recommendations for equity gaps?
These are not hypothetical concerns. They are operational requirements for any organization serious about patient safety and quality improvement.
We Need More Than Awareness. We Need Action.
Awareness is where this conversation starts — not where it ends. For healthcare organizations, the work is translating awareness into operational change: in how workflows are designed, how AI is governed, how quality data is reviewed, and how the people delivering care are trained and supported.
At MDA Solutions, this work sits at the center of what we do. If your organization is navigating AI implementation, health equity strategy, or quality improvement and you want a partner who brings both operational expertise and a personal understanding of what is at stake — we should talk.
For more on the policy landscape around Black maternal health, watch Rep. Summer Lee challenge RFK Jr. on DEI cuts and their impact on Black maternal mortality.
Originally published on LinkedIn. Follow MDA Solutions LLC on LinkedIn for ongoing insights on healthcare AI, quality improvement, and operational readiness.
Tags: Black Maternal Health | Health Equity | Healthcare AI | Patient Safety | Clinical Decision Support | Quality Improvement | Healthcare Disparities | EHR Optimization



