Chapter 3: The FACT Framework
To build trust into systems, we use a simple but powerful guide: the FACT framework.
Fairness, Accuracy, Confidentiality, Transparency.
Let’s walk through each.
Fairness
Vanessa Candela:
“Ethical AI and analytics means ensuring that AI-driven decisions and processes are unbiased, fair, and explainable, and avoiding harm by monitoring for potential unintended consequences.”
Let’s say your model prioritizes patients who typically have shorter stays. But those patients are overwhelmingly younger. Even if the bias is unintentional, the system now disadvantages older adults.
Fairness means identifying and correcting those hidden biases.
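What does that identification look like in practice? Here’s a minimal sketch in Python, assuming a hypothetical triage model whose decisions we audit after the fact; the records and the 0.8 “four-fifths” threshold are illustrative assumptions, not a clinical or legal standard.

```python
# A minimal fairness audit: compare how often each age group is prioritized.
# The decision log and the 0.8 threshold are invented for illustration.
from collections import defaultdict

# Each record: (age_group, was_prioritized), a hypothetical audit log.
decisions = [
    ("under_65", True), ("under_65", True), ("under_65", True),
    ("under_65", False),
    ("65_plus", True), ("65_plus", False),
    ("65_plus", False), ("65_plus", False),
]

totals = defaultdict(int)
prioritized = defaultdict(int)
for group, was_prioritized in decisions:
    totals[group] += 1
    prioritized[group] += was_prioritized  # True counts as 1

rates = {g: prioritized[g] / totals[g] for g in totals}
print(rates)  # {'under_65': 0.75, '65_plus': 0.25}

# Disparate-impact ratio: least-favored group's rate over most-favored's.
ratio = min(rates.values()) / max(rates.values())
if ratio < 0.8:  # the "four-fifths" rule of thumb
    print(f"Potential bias: impact ratio {ratio:.2f} is below 0.8")
```

The check is crude by design: it won’t tell you why the gap exists, only that one group is being treated differently enough to warrant a closer look.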
Accuracy (and Simpson’s Paradox)
Wil van der Aalst:
“People who see process intelligence results need to interpret them correctly. Diagnostics can be very misleading if people do not understand exactly what they mean.”
Take a hospital dashboard. It shows that patient wait times are dropping—great. But a deeper look reveals it’s only because lower-risk patients are being prioritized, while higher-risk ones wait longer.
That’s optimizing for the wrong thing—speed without context.
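To see how the dashboard can mislead, here’s a small worked example with invented numbers: shift the patient mix toward low-risk cases and the overall average wait falls, even though high-risk patients now wait longer than before.

```python
# Invented numbers: (share_of_patients, average_wait_minutes) per risk group.
before = {"low_risk": (0.5, 30), "high_risk": (0.5, 60)}
after  = {"low_risk": (0.8, 25), "high_risk": (0.2, 75)}

def overall_wait(mix):
    """Patient-weighted average wait across risk groups."""
    return sum(share * wait for share, wait in mix.values())

print(overall_wait(before))  # 45.0 minutes
print(overall_wait(after))   # 35.0 minutes: the headline number improved,
                             # yet high-risk waits rose from 60 to 75 minutes
```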
Wil van der Aalst:
“If you base analysis on incomplete or biased data, you may get inaccurate conclusions. Accuracy means understanding the full context of the process. It’s not just about raw numbers—it’s about how those numbers relate to how the process actually works.”
And that brings us to Simpson’s Paradox:
Wil van der Aalst:
“You may find that in every study program, female students outperform males. But when you combine all the data, it looks like males are doing better overall. That’s Simpson’s Paradox—the reversal of group-level trends when data is aggregated. It’s not only possible; it happens.”
The lesson: aggregate statistics don’t just hide nuance; they can reverse the trend entirely.
Wil van der Aalst:
“It’s very dangerous—and often incorrect—to look only at aggregate statistics. You need to examine data in much more detail. Otherwise, you risk drawing the wrong conclusion.”
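You can reproduce that reversal in a few lines. The enrollment counts and grades below are invented, but they mirror the study-program example: female students score higher within every program, yet the aggregate says the opposite.

```python
# Simpson's Paradox with invented numbers: the within-group trend reverses
# once the groups are pooled, because enrollment is split unevenly.
programs = {
    # program: {group: (number_of_students, average_grade)}
    "engineering": {"female": (20, 8.0), "male": (80, 7.5)},
    "medicine":    {"female": (80, 6.5), "male": (20, 6.0)},
}

def overall_average(group):
    n = sum(programs[p][group][0] for p in programs)
    weighted = sum(programs[p][group][0] * programs[p][group][1] for p in programs)
    return weighted / n

for name, stats in programs.items():
    # Female averages are higher in *every* individual program.
    print(name, "female:", stats["female"][1], "male:", stats["male"][1])

print("overall female:", overall_average("female"))  # 6.8
print("overall male:  ", overall_average("male"))    # 7.2, the trend reversed
```

The reversal is driven by composition: most female students are in the program with the tougher grading, so pooling the data mixes two different baselines.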
Confidentiality
Imagine you’re in the ER again. The hospital’s PI system is pulling data from visit history, insurance records, zip code, even socioeconomic status. Maybe your ethnicity.
You didn’t even know.
That’s where confidentiality comes in, together with the principle of data minimization.
Vanessa Candela:
“Privacy laws require organizations to collect the smallest amount of data they need for a specific purpose—and that purpose must be permitted by whoever owns the data. But PI thrives on large, diverse datasets. So how do you balance the need for insight with data minimization?”
It’s not about bad intent. It’s about informed use.
Vanessa Candela:
“When you’re talking about personal data and laws like GDPR or CCPA, you face a dilemma: is it ethical or legal to collect massive amounts of data to optimize productivity—when you’re balancing that against what you’re actually allowed to use it for?”
Vanessa Candela:
“From a legal perspective, if you over-collect data, you can end up out of compliance. That’s the challenge—data minimization versus operational efficiency.”
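In code, data minimization can start with something as simple as an allow-list applied before any analysis runs. The field names and the salted-hash pseudonymization below are illustrative assumptions, not a compliance recipe:

```python
# A minimal data-minimization sketch: keep only the fields the stated analysis
# needs, and replace the direct identifier with a salted pseudonym.
# All field names and values here are invented for illustration.
import hashlib

def minimize(record, allowed_fields, salt="rotate-me"):
    """Drop everything outside the allow-list; pseudonymize the patient ID."""
    kept = {k: v for k, v in record.items() if k in allowed_fields}
    kept["patient_id"] = hashlib.sha256(
        (salt + str(record["patient_id"])).encode()
    ).hexdigest()[:12]
    return kept

raw = {
    "patient_id": 10472,
    "visit_start": "2024-03-01T08:12",
    "visit_end": "2024-03-01T11:40",
    "zip_code": "90210",
    "ethnicity": "group_a",
    "insurance_plan": "gold",
}

# A wait-time analysis needs timestamps, not demographics or location.
print(minimize(raw, allowed_fields={"visit_start", "visit_end"}))
```

The purpose defines the allow-list, which is exactly the discipline the regulations ask for: decide what the analysis is for first, then collect only what that purpose needs.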
Transparency
Transparency means doctors don’t just get a recommendation—they get the reasoning behind it.
Vanessa Candela:
“We talk about transparency and accountability—providing clear explanations for how decisions are made by intelligent systems and offering visibility into those automated processes so that people understand and can trust the systems they’re relying on.”
Wil van der Aalst:
“People should be able to understand PI diagnostics. If you don’t know how the result was produced, you may completely misinterpret it. Transparency is understanding the pipeline—what data was used, what was left out, and what transformations took place.”
Vanessa Candela:
“If a process mining tool flags a bottleneck, but the user doesn’t know what’s behind it, they can’t act on it. Transparency helps them understand the cause—and make better decisions.”
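One way to give users that visibility is to have the pipeline write down its own reasoning as it runs. The steps, thresholds, and messages in this sketch are hypothetical; the point is that a flagged bottleneck arrives with an audit trail attached:

```python
# A transparency sketch: each pipeline step records what it used, what it
# dropped, and how the result was computed. All values are invented.
provenance = []

def log_step(step, detail):
    provenance.append({"step": step, "detail": detail})

def load_events(raw_events):
    complete = [e for e in raw_events if e.get("timestamp")]
    log_step("load", f"kept {len(complete)} of {len(raw_events)} events; "
                     "dropped events with missing timestamps")
    return complete

def flag_bottleneck(events, threshold_minutes=45):
    waits = [e["wait_minutes"] for e in events]
    avg = sum(waits) / len(waits)
    log_step("analysis", f"average wait {avg:.1f} min over {len(waits)} events; "
                         f"threshold {threshold_minutes} min")
    return avg > threshold_minutes

events = [
    {"timestamp": "08:00", "wait_minutes": 50},
    {"timestamp": "09:00", "wait_minutes": 65},
    {"timestamp": None,    "wait_minutes": 20},
]

print("bottleneck flagged:", flag_bottleneck(load_events(events)))
for entry in provenance:  # the reasoning behind the recommendation
    print(entry)
```

That’s the whole idea: the same system that produces the diagnostic also explains what data it used, what it left out, and what transformations took place.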