When AI decides who gets care first, who gets left behind?

In this episode of the Trust the Process Podcast, we unpack the ethics of Process Intelligence: why it’s not just about efficiency, but fairness, transparency, and trust. From biased hospital algorithms to the human decisions that still matter most, we explore how to build systems that help—without harm.

The following is a transcript of the podcast, edited and organized for readability.

Listen to the episode: https://player.captivate.fm/episode/fee03682-53e6-4b0d-8e8a-a9fef3ceea2d

The emergency room algorithm that got it wrong

Imagine you’re sitting in a crowded Emergency Room.

You’ve been waiting for hours with chest pain, but you keep seeing others go ahead of you. Turns out, an algorithm is deciding who gets seen first.

That algorithm prioritizes the patients it predicts will have the highest healthcare needs, using their past healthcare costs as a proxy. Sounds objective, sure. But fair? Not at all.

Because here’s the twist: cost isn’t the same as need.

In 2019, researchers uncovered that a widely deployed algorithm in the U.S. healthcare system was systematically underestimating the needs of patients from historically underserved communities. Why? Because those communities, on average, had lower past healthcare costs—not because they were healthier.

Over 200 million people were impacted.

Once this bias was identified, the algorithm was adjusted to rely on clinical data rather than cost proxies. The improvement was striking: the share of patients from underserved groups flagged for additional care rose from 17% to 47%.

That brings us to today’s topic: Responsible Process Intelligence.

Because when AI makes decisions that affect people’s lives, how do we make sure it makes the right ones?

What does it mean to build Responsible Process Intelligence?

Welcome to Trust the Process, the podcast that demystifies the world of process mining and intelligence.

I’m Angela from the Celonis Academic Alliance, and today we’re diving into one of the most important—and often overlooked—questions in tech:

What does it mean to build Responsible Process Intelligence?

How do we make sure PI is not just efficient—but fair, transparent, and trustworthy?

To help us explore this, we’re joined by two experts who live and breathe it:

Wil van der Aalst, credited with founding the field of process mining
Vanessa Candela, Chief Legal Officer at Celonis and a leader in ethical AI

Chapter 1: What is Responsible PI?

Responsible Process Intelligence is part of the larger research field of Responsible AI. But it’s not just about legal compliance. It’s about building trust into the systems that shape decisions—from how a hospital discharges patients to how a supply chain prioritizes shipments.

Vanessa Candela:
“Responsible process intelligence means doing this in a way that aligns with ethical principles, sustainability goals of companies, and really fairness in decision-making and operations. Some key components to that: ethical AI and analytics, ensuring that AI-driven decisions and processes are unbiased, fair, and explainable… transparency and accountability, providing clear explanations for how decisions are made by intelligent systems… and then of course, human-centric design. There has to be a human in the loop.”

Wil van der Aalst:
“If you would like to define responsible process intelligence, you can say that it is the usage of event data and process mining and AI techniques to improve processes, but now the important part comes: in such a way that it is not harmful for people. So this considers things like not violating someone’s privacy, but also things like transparency, fairness, and whether the results are accurate at all.”

Let’s take that back to the ER example.

Hospitals are using Process Intelligence to analyze delays and optimize how patients move through the system. Sounds great, right?

But what if the system learns that patients under 40 tend to recover faster and cost less to treat—and starts prioritizing them?

If we’re not careful, AI starts making decisions that look efficient… but leave older patients waiting longer, even when their conditions are more serious.

That’s what Vanessa means by human-centric. The goal isn’t just to optimize care—it’s to make sure every patient gets the care they need, when they need it.

Chapter 2: Why Do We Need Responsible PI?

Why can’t we just assume our algorithms are neutral if the data is solid?

Vanessa Candela:
“We want to optimize workflows in hospitals to ensure efficiency, but we have to do that while we ensure patient data privacy and ethical treatment. So if you have process intelligence, the people leveraging the outcomes need to be able to trust that process. And if you don't show you're being responsible with how you leverage that PI—or who you design it for—you won't gain the trust of users and stakeholders. That makes it very difficult to be successful.”

In our Emergency Room example, that trust is everything.

If a tool tells a nurse to discharge an older patient early but doesn’t explain why—it breaks trust.

If clinicians don’t trust the system, they won’t use it. Or worse, they’ll follow it blindly.

Either way, you lose the value PI is meant to create.

Responsible PI is about building systems that people understand, trust, and can challenge when needed.

Chapter 3: The FACT Framework

To build trust into systems, we use a simple but powerful guide: the FACT framework.

Fairness, Accuracy, Confidentiality, Transparency.

Let’s walk through each.

Fairness

Vanessa Candela:
“Ethical AI and analytics means ensuring that AI-driven decisions and processes are unbiased, fair, and explainable. Avoiding harm by monitoring for potential unintended consequences.”

Let’s say your model prioritizes patients who typically have shorter stays. But those patients are overwhelmingly younger. Even if unintentional, the system now disadvantages older adults.

Fairness means identifying and correcting those hidden biases.
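
To make that concrete, here is a minimal sketch of what such a bias audit might look like. Everything in it is hypothetical: the column names (age_band, acuity, priority_score), the data, and the tooling (pandas). The idea is simply to compare the model’s priority scores against clinically assessed urgency, per age group.

```python
# Hypothetical triage data; acuity is clinically assessed urgency (1 = most urgent),
# priority_score is the model's output (higher = seen sooner).
import pandas as pd

df = pd.DataFrame({
    "age_band":       ["<40", "<40", "<40", "40+", "40+", "40+"],
    "acuity":         [3, 2, 3, 1, 2, 1],
    "priority_score": [0.90, 0.80, 0.85, 0.40, 0.60, 0.50],
})

# Compare the model's priority with clinical urgency per age group.
# Higher urgency but lower priority for one group suggests a proxy bias.
audit = df.groupby("age_band").agg(
    mean_acuity=("acuity", "mean"),
    mean_priority=("priority_score", "mean"),
)
print(audit)
# In this toy data, the 40+ group is more urgent on average (acuity 1.33 vs. 2.67)
# yet receives markedly lower priority scores (0.50 vs. 0.85).
```

If one group shows higher urgency but consistently lower priority, the model is likely leaning on a proxy, such as cost or length of stay, rather than need.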

Accuracy (and Simpson’s Paradox)

Wil van der Aalst:
“People who see process intelligence results need to interpret them correctly. Diagnostics can be very misleading if people do not understand exactly what they mean.”

Take a hospital dashboard. It shows that patient wait times are dropping—great. But a deeper look reveals it’s only because lower-risk patients are being prioritized, while higher-risk ones wait longer.

That’s optimizing for the wrong thing—speed without context.

Wil van der Aalst:
“If you base analysis on incomplete or biased data, you may get inaccurate conclusions. Accuracy means understanding the full context of the process. It’s not just about raw numbers—it’s about how those numbers relate to how the process actually works.”

And that brings us to Simpson’s Paradox:

Wil van der Aalst:
“You may find that in every study program, female students outperform males. But when you combine all the data, it looks like males are doing better overall. That’s Simpson’s Paradox—the reversal of group-level trends when data is aggregated. It’s not only possible, it happens.”

The lesson: aggregate stats hide nuance.

Wil van der Aalst:
“It’s very dangerous—and often incorrect—to look only at aggregate statistics. You need to examine data in much more detail. Otherwise, you risk drawing the wrong conclusion.”
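
Here is a tiny worked example of that reversal, with invented numbers: women pass at a higher rate in each program, yet the aggregate points the other way, purely because most of the women applied to the harder program.

```python
# Simpson's Paradox with invented numbers:
# (women_passed, women_total, men_passed, men_total) per study program.
results = {
    "Program A": (8, 10, 70, 100),
    "Program B": (20, 100, 1, 10),
}

totals = {"women": [0, 0], "men": [0, 0]}
for program, (wp, wt, mp, mt) in results.items():
    print(f"{program}: women {wp / wt:.0%} vs. men {mp / mt:.0%}")
    totals["women"][0] += wp; totals["women"][1] += wt
    totals["men"][0] += mp;   totals["men"][1] += mt

for group, (passed, total) in totals.items():
    print(f"Overall {group}: {passed / total:.0%}")

# Program A: women 80% vs. men 70%
# Program B: women 20% vs. men 10%
# Overall women: 25%, overall men: 65% -- the group-level trend reverses.
```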

Confidentiality

Imagine you’re in the ER again. The hospital’s PI system is pulling data from visit history, insurance records, zip code, even socioeconomic status. Maybe your ethnicity.

You didn’t even know.

That’s where confidentiality—and data minimization—comes in.

Vanessa Candela:
“Privacy laws require organizations to collect the smallest amount of data they need for a specific purpose—and that purpose must be permitted by whoever owns the data. But PI thrives on large, diverse datasets. So how do you balance the need for insight with data minimization?”

It’s not about bad intent. It’s about informed use.

Vanessa Candela:
“When you’re talking about personal data and laws like GDPR or CCPA, you face a dilemma: is it ethical or legal to collect massive amounts of data to optimize productivity—when you’re balancing that against what you’re actually allowed to use it for?”

Vanessa Candela:
“From a legal perspective, if you over-collect data, you can end up out of compliance. That’s the challenge—data minimization versus operational efficiency.”
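
As a rough illustration, here is a minimal sketch of those two moves on an invented event log: keep only the columns the analysis actually needs, and pseudonymize identifiers so that cases can still be correlated. Note that a salted hash is pseudonymization, not anonymization, so the legal obligations still apply.

```python
# Data minimization on an invented event log: drop columns the analysis
# does not need, and pseudonymize the case identifier.
import hashlib

import pandas as pd

raw = pd.DataFrame({
    "patient_id": ["p-001", "p-002", "p-001"],
    "activity":   ["Triage", "Triage", "Discharge"],
    "timestamp":  pd.to_datetime(["2024-05-01 08:00", "2024-05-01 08:10",
                                  "2024-05-01 14:30"]),
    "zip_code":   ["10115", "50667", "10115"],  # not needed for a bottleneck analysis
    "ethnicity":  ["A", "B", "A"],              # not needed either: drop it
})

def pseudonymize(value: str, salt: str = "rotate-me") -> str:
    """Replace an identifier with a salted hash so cases can still be correlated."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

# Keep only what the bottleneck analysis needs; pseudonymize the rest.
minimal = raw[["patient_id", "activity", "timestamp"]].copy()
minimal["patient_id"] = minimal["patient_id"].map(pseudonymize)
print(minimal)
```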

Transparency

Transparency means doctors don’t just get a recommendation—they get the reasoning behind it.

Vanessa Candela:
“We talk about transparency and accountability—providing clear explanations for how decisions are made by intelligent systems and offering visibility into those automated processes so that people understand and can trust the systems they’re relying on.”

Wil van der Aalst:
“People should be able to understand PI diagnostics. If you don’t know how the result was produced, you may completely misinterpret it. Transparency is understanding the pipeline—what data was used, what was left out, and what transformations took place.”

Vanessa Candela:
“If a process mining tool flags a bottleneck, but the user doesn’t know what’s behind it, they can’t act on it. Transparency helps them understand the cause—and make better decisions.”
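
One way to support that kind of transparency is to let the pipeline carry its own record of what happened to the data. The sketch below is purely illustrative (the Provenance class and the filter are invented); it shows the shape of the idea, not a Celonis feature.

```python
# An invented Provenance helper: each pipeline step records what it did,
# so any diagnostic can be traced back to the data that produced it.
from dataclasses import dataclass, field

@dataclass
class Provenance:
    steps: list[str] = field(default_factory=list)

    def log(self, description: str) -> None:
        self.steps.append(description)

prov = Provenance()
cases = list(range(1, 101))  # stand-in for 100 ER cases

prov.log(f"Loaded {len(cases)} cases from the ER event log (2024-Q1)")
cases = [c for c in cases if c % 7 != 0]  # stand-in for dropping incomplete cases
prov.log(f"Filtered out incomplete cases; {len(cases)} remain")

# A diagnostic shown to a clinician ships with its provenance trail:
print("Bottleneck report based on:")
for step in prov.steps:
    print(" -", step)
```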

Chapter 4: Why We Still Need Humans

AI can be brilliant. But it can also be wildly wrong. That’s why the final safeguard in any responsible system isn’t code. It’s people.

Imagine a PI tool recommends early discharge for a patient flagged as low-risk. But a nurse notices something subtle the system didn’t catch.

That’s where human oversight matters.

Vanessa Candela:
“People talk about hallucinations with GenAI—they’re real. That’s just one reason we need someone in the loop. AI can dramatically increase productivity and accuracy, but I don’t think we’ll see a time when we don’t need a human in the loop. It builds trust.”

Vanessa Candela:
“GenAI can provide broader context and help people think differently. But there still has to be a human to interpret and apply that insight.”

Wil van der Aalst:
“You must understand how a tool produced its output. If you don’t, you may misinterpret the result. Transparency enables sound judgment.”

Vanessa Candela:
“Humans should be overseeing really critical decisions. Systems should complement people—not replace them.”

Chapter 5: From Ethics to Execution

This is what Responsible Process Intelligence is about:

Not just speed. Not just scale.

But fairness, accuracy, confidentiality, and transparency.

Vanessa Candela:
“If you have process intelligence, the people using it need to trust it. If you don’t show that you’re using it responsibly—or designing it for the right users—you won’t earn their trust.”

That’s the irony.

To make the most of Process Intelligence, we don’t just need better algorithms.

We need to design systems that earn trust.

That’s how we protect privacy, improve decisions, and yes… save lives.

Because when it comes to high-stakes decisions, what we really want is simple:

We want to trust the process.

Listen to Trust the Process Podcast

Next time, we’ll show you what happens when that trust fuels transformation at scale. We’re heading to BMW, where thousands of employees are already using PI.

From ethics… to execution.

Because when processes work, everything works.

Image Credit: Photo by Andrik Langfield on Unsplash