From The Desk of

Matt dives into a specific healthcare topic to help those in the industry, and those outside of it, better understand the market drivers causing today’s healthcare challenges.
I get angry when I watch leaders use AI to distance themselves from the human cost of their decisions.
I have sat in executive conference rooms reviewing dashboards, trend lines, and utilization curves. I know the language. Loss ratios. Prior authorization volumes. Step therapy compliance. Medical trend. I have helped build strategy around those numbers.
Then I sat on an exam table holding imaging that showed neurological damage. I had documented loss of function. I had a physician ready to treat me. An algorithm overruled him in seconds.
My anger does not target the technology. It targets the intent behind how we deploy it.
THE EFFICIENCY TRAP
Right now, too many organizations deploy AI to optimize denial rates, compress short-term spend, and automate friction. They train models on historical claims data that already reflect cost-first decision pathways. Then they scale that bias across millions of lives.
In 2024, a Senate report criticized major insurers for limiting access to post-acute care. One large insurer saw its prior authorization denial rate for post-acute care jump from 10.9 percent in 2020 to 22.7 percent in 2022. Denied claims in 2022 ran nine times higher than in 2019.
The model predicted length of stay. When a patient exceeded the prediction, the system triggered a denial.
Executives call that efficiency.
Patients experience abandonment.
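Strip away the dashboard language and the rule is simple enough to sketch in a few lines. What follows is a toy illustration, not any insurer's actual system; every name, field, and threshold is hypothetical.

```python
# Illustrative only: a toy version of a predicted-length-of-stay denial rule.
# Every name, field, and threshold here is hypothetical.

from dataclasses import dataclass


@dataclass
class PostAcuteStay:
    patient_id: str
    predicted_los_days: int   # model's predicted length of stay
    actual_los_days: int      # days of care actually used so far


def coverage_decision(stay: PostAcuteStay) -> str:
    """Deny further coverage the moment the stay exceeds the prediction.

    Note what never enters the function: the physician's assessment,
    the imaging, the documented loss of function.
    """
    if stay.actual_los_days > stay.predicted_los_days:
        return "DENY"    # triggered in seconds, appealable over months
    return "APPROVE"


print(coverage_decision(PostAcuteStay("p-001", predicted_los_days=14, actual_los_days=15)))
# -> DENY
```

The entire clinical picture reduces to one integer comparison. That is what "an algorithm overruled him in seconds" means in practice.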
Here is the core issue. AI amplifies the incentive structure of the system that builds it. If the system rewards margin over outcomes, AI will protect margin. If the system rewards durable health and long-term trust, AI will accelerate that.
So I ask one question every time I see AI deployed in healthcare: When there is conflict, who does it serve?
If the answer tilts toward the balance sheet over the bedside, we have merely upgraded the machinery that failed patients in the first place.
WHAT THE MACHINE DOES TO DOCTORS
When the algorithm overruled my physician, he looked at me and said, “I agree with you. I just cannot get it approved.”
That sentence captures the fracture in modern medicine.
He did not question the imaging. He did not doubt the loss of function. He understood what I needed. He also understood that the utilization management protocol would override his judgment.
He shifted from clinical reasoning to bureaucratic navigation. He talked about wording the note differently. About coding adjustments. About failing the next required therapy to satisfy the sequence.
Think about what that does to a physician.
You train for more than a decade. You take an oath. You carry the liability. You sit face to face with a patient. Then an algorithm built by people who will never examine that patient dictates the treatment pathway.
That creates moral injury.
Stanford and Mayo research shows that loss of autonomy and administrative burden strongly correlate with depression and intent to leave practice. Prior authorization ranks near the top of those stressors.
I did not see incompetence in that room. I saw resignation.
Every no from the machine erodes the physician's authority. Patients see powerlessness. Trust fractures.
Over time, physicians adapt. They stop prescribing what they know will trigger resistance. They pre-edit themselves. They practice to the algorithm instead of to the patient in front of them.
That quiet shift changes care long before anyone publishes a study about it.

The patient voice remains the most untapped data source in healthcare…
THE METRICS THAT LIE
Healthcare leaders measure control.
Denial rates stable. Length of stay aligned with predictive targets. Cost per member per month within range. Turnaround times shortened. Variation reduced. On a dashboard, that looks like operational excellence.
What those dashboards never capture:
The prescription never written because the physician anticipates friction.
The referral never placed because the administrative burden feels pointless.
The patient who gives up after the first denial and pays cash for something inferior.
The disease progression that accelerates quietly because treatment started six months late.
No metric captures clinical hesitation.
As of 2022, more than 80 percent of appealed Medicare Advantage prior authorization denials were overturned in favor of patients. Yet only about 0.2 percent of policyholders actually appeal.
Low appeal rates often signal fatigue, not agreement.
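Run the arithmetic on those two figures. The denial volume below is an assumed round number, there purely to make the percentages concrete.

```python
# Back-of-the-envelope math on the appeal statistics above.
# The denial volume is assumed for illustration; the rates come from the text.

denials = 100_000       # hypothetical number of prior authorization denials
appeal_rate = 0.002     # ~0.2 percent are appealed
overturn_rate = 0.80    # more than 80 percent of appeals succeed

appealed = denials * appeal_rate
corrected = appealed * overturn_rate

print(f"Appealed:  {appealed:,.0f}")                                      # 200
print(f"Corrected: {corrected:,.0f}")                                     # 160
print(f"Share of all denials ever corrected: {corrected / denials:.2%}")  # 0.16%
```

If unappealed denials are wrong at anything close to the rate of appealed ones, the overwhelming majority of wrong denials simply stand.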
If you never measure what clinicians wanted to do but did not attempt, you convince yourself the system works. Meanwhile, the standard of care drifts downward.
WHO SITS IN THE ROOM
I will tell you who sits in the room when AI models get built. Data scientists. Actuaries. Product managers. Finance. Legal. Compliance. Sometimes a medical director who understands coding and risk adjustment deeply.
They define the target variable. Length of stay. Readmission probability. Likelihood a request exceeds policy criteria. They tune the model to reduce false approvals more aggressively than false denials because finance pressures them to control trend.
They rarely start with, “What outcome matters most to the patient in front of the clinician?”
They start with, “How do we reduce avoidable spend while protecting regulatory exposure?”
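That tuning step deserves a closer look, because it often amounts to a single argument in the training code. Here is a minimal sketch on synthetic data, with hypothetical labels, of how a class-weight choice encodes whose errors the organization fears more.

```python
# A minimal sketch of asymmetric error tolerance, using synthetic data.
# Label 1 means "request exceeds policy criteria" (i.e., flag for denial).

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 4))                         # stand-in claim features
y = (X[:, 0] + rng.normal(size=5_000) > 1).astype(int)  # hypothetical labels

# Penalize a missed flag (a "false approval") five times harder than an
# unnecessary flag (a "false denial"). Nothing clinical chose this ratio.
skewed = LogisticRegression(class_weight={0: 1, 1: 5}).fit(X, y)
neutral = LogisticRegression().fit(X, y)

print(f"Neutral weighting flags {neutral.predict(X).mean():.1%} of requests")
print(f"Skewed weighting flags  {skewed.predict(X).mean():.1%} of requests")
```

Change one number in that dictionary and thousands of coverage decisions shift, with no public record that a choice was made.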
Who remains absent?
The patient who lost function because of a delayed approval.
The caregiver who navigated denials while working full time.
The nurse who sees decline before any claim code reflects it.
When you exclude lived experience, you narrow the definition of harm. In those rooms, harm means overspending relative to forecast. It rarely means prolonged pain, lost income, or eroded trust.
WHAT PHYSICIANS SAY IN PRIVATE
When the door closes, physicians tell me something simple. They do not trust the intent.
In implementation meetings, they nod. In private, they say, “I chart for the machine now.”
Nearly two-thirds of physicians reported using healthcare AI in 2024, a sharp increase from the prior year. Most cite administrative burden reduction as the biggest opportunity.
But trust requires believing the system stands behind your oath. One hospitalist told me, “If I know the model flags anything over four days, I start planning discharge on day three, whether the patient is ready or not.”
That sentence should make every executive pause.
Doctors also feel surveilled. AI tools benchmark ordering patterns and flag outliers. Physicians worry about being labeled a high utilizer more than they worry about being clinically aggressive.
They sense that the tool serves the system first.
THE LIABILITY SHELL GAME
When harm occurs, the algorithm does not sit in a deposition chair. The physician does.
If a doctor follows AI guidance and harm occurs, plaintiffs' attorneys ask why they deferred to software. If the doctor overrides it and harm occurs, the institution asks why they ignored approved decision support.
The health system points to independent medical judgment.
The vendor cites disclaimers.
The payer references policy criteria.
The patient stands in the middle.
We distributed decision authority across software, finance, compliance, and clinical operations. We left accountability anchored to the clinician.
As AI adoption expands, the definition of standard of care may shift. Failure to use a widely adopted tool could be framed as a deviation from it. Adoption increases exposure. Non-adoption increases exposure.
Either way, the physician signs the chart.
THE IMAGING AI REALITY
AI in imaging holds real promise. Pattern recognition at scale. Fatigue-free scanning. Faster detection.
Radiologists in trials detect lesions faster and identify more cases with AI support. Yet only 19 percent of health systems report high levels of success after deploying imaging AI tools.
Why?
Training data bias. Models built on curated academic datasets struggle in community settings.
Prevalence distortion. Validation sets often inflate disease rates, which skews perceived performance. The sketch after this list puts numbers on the effect.
Workflow strain. More flags create more downstream work. Alert fatigue sets in.
Reimbursement focus. Systems measure throughput and billing alignment. Few track whether earlier detection translates into durable survival gains across populations.
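The prevalence problem is worth making concrete, because it follows directly from Bayes' rule. A minimal worked example, with assumed sensitivity and specificity rather than figures from any published model:

```python
# Positive predictive value depends on prevalence.
# Sensitivity and specificity below are assumed for illustration.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Bayes' rule: probability that a flagged case is truly positive."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.90, 0.90
print(f"Curated validation set (50% prevalence): PPV = {ppv(sens, spec, 0.50):.0%}")  # 90%
print(f"Community screening (2% prevalence):     PPV = {ppv(sens, spec, 0.02):.0%}")  # 16%
```

The same model that looks 90 percent reliable in validation is wrong on roughly five of every six flags in a low-prevalence community population. That gap is where the alert fatigue and the downstream workload come from.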
AI can find more. The question is whether the system acts responsibly on what it finds.
I have held imaging that showed structural damage. The problem was never image quality. It was how the system responded to what it saw.
WHAT NEEDS TO CHANGE
AI should reduce administrative burden. It should surface social risk. It should flag gaps before crisis.
Instead, we often deploy it as a gatekeeper between a sick human and a treatment plan.
If we want a different future, we need structural change.
Include patients in AI governance with real authority.
Define shared liability frameworks that match distributed decision power.
Require transparent audit trails for every AI-influenced clinical decision.
Measure long-term functional outcomes, not just short-term spend.
Rebalance error tolerance to reflect clinical risk, not just financial exposure.
Align executive incentives with durable patient health.
The algorithm reflects the values of the people who build it.
Right now, too many rooms optimize for containment. I have seen the system from the executive chair and from the exam table.
AI belongs in healthcare.
But only if we build it for the patient sitting in front of the clinician, not for the spreadsheet sitting in front of the board.
