Process Mining in Practice
How a continuous improvement mindset turns process mining into a game‑changer
July 29, 2025

I've never been the one running operations day to day, but I have worked closely with many people who were. And I see a clear pattern: the biggest wins don't come from teams that treat process mining as just another reporting tool. They come from teams that embed process mining into how they work, not just to analyse the past, but also to monitor operations day to day.
It's this shift from occasional analysis to continuous improvement that turns process mining from useful to transformative.
In this article, I want to show what that looks like in practice. The example is fabricated, but based on patterns I've seen many times before.
Setting the Scene: The New Ops Lead
Imagine you're stepping into the role of Operations Lead for a growing home services company that specialises in urgent repair work like plumbing, heating, or electrical issues. Customers can submit requests through an online portal for review. Once approved, the job is scheduled for a technician to visit the property.
You inherit a slick BI dashboard tracking the usual metrics:
- Average time from initial request to job completion
- Percentage of requests breaching the SLA (a service level agreement of 1 working day to schedule a technician after the request is approved)
- Number of requests in each status (e.g. In Review, Awaiting Approval, Scheduled, Complete)
At a glance, things look OK, but a few indicators stand out:
- SLA breaches are sitting at 12%. That's too high for comfort, but it's not clear where in the process the time is being lost.
- There's a noticeable backlog of requests under review, but again it's not clear why.
You try slicing the data by region and request type, but nothing clearly points to what's driving the issues. You decide it's time to use process mining and see what the actual process flows tell you.
Step 1: Visibility
You work with your data team to connect and structure event data across three systems:
- The request submission platform
- The case management tool (for review and approvals)
- The scheduling system (for assigning technicians and logging completion)
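In practice, that preparation usually means reshaping each system's records into one shared event log of case ID, activity, and timestamp. A minimal sketch of what that could look like in pandas, with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical extracts from the three systems; real table and column names will differ.
submissions = pd.read_csv("portal_requests.csv")    # request_id, submitted_at
reviews     = pd.read_csv("case_management.csv")    # request_id, status, status_changed_at, user_id
scheduling  = pd.read_csv("scheduling_events.csv")  # request_id, event, event_at

# Normalise each source into a shared case_id / activity / timestamp shape.
events = pd.concat([
    submissions.assign(activity="Submitted")
               .rename(columns={"request_id": "case_id", "submitted_at": "timestamp"}),
    reviews.rename(columns={"request_id": "case_id", "status": "activity",
                            "status_changed_at": "timestamp"}),
    scheduling.rename(columns={"request_id": "case_id", "event": "activity",
                               "event_at": "timestamp"}),
], ignore_index=True)[["case_id", "activity", "timestamp", "user_id"]]

events["timestamp"] = pd.to_datetime(events["timestamp"])
event_log = events.sort_values(["case_id", "timestamp"]).reset_index(drop=True)
```

The drilldowns later in this article all run against this kind of event_log table, whether you build it yourself or let your process mining tool do it for you.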
Once it's all connected in your process mining tool, a few patterns immediately stand out:
- A flow where requests marked as approved are immediately followed by a second review and second approval.
- A rework loop where some requests are sent back to review after already reaching the "awaiting technician" stage.
- A delay between "approved" and "awaiting technician", even though there is no work done between these two status updates.
Now that the key patterns are visible, it's time to dig deeper.
Step 2: Analysis
You start by looking at each one in turn and working through the business impact. Does this add extra time, extra effort, or lead to avoidable mistakes? Then you see what the process data can tell you, examining all the other data connected with each request and event. That helps you figure out who to talk to and where to focus your investigation.
It is this combination of clear business impact, solid evidence, and targeted conversations that leads you to the real root causes.
Approval followed by another approval
You spot a recurring pattern in the process where some requests are approved, sent back to review, and then re-approved with no data updates in between.
🔍 Drilldown:
- You filter for requests that follow the path: Approved → Reviewed → Approved.
- Looking at the user IDs, you notice the first approval is always marked as auto_approver_bot, while the second approval is carried out by a named user.
- You inspect the data associated with the request and find that the submission platform calculated a high confidence score in every case, well above the usual threshold for auto-approval.
- It strikes you as odd: if the case truly required human review, why was it approved first?
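For illustration, that path filter and user check could look roughly like the sketch below, run against the event log from earlier (the activity names and the user_id column are assumptions):

```python
# Flag cases whose activity sequence contains Approved -> Reviewed -> Approved.
def has_reapproval(activities) -> bool:
    seq = list(activities)
    return any(seq[i:i + 3] == ["Approved", "Reviewed", "Approved"]
               for i in range(len(seq) - 2))

ordered = event_log.sort_values(["case_id", "timestamp"])
reapproved_cases = (ordered.groupby("case_id")["activity"]
                           .apply(has_reapproval)
                           .loc[lambda flagged: flagged]
                           .index)

# Who performed each approval in the affected cases?
approvals = ordered[ordered["case_id"].isin(reapproved_cases)
                    & (ordered["activity"] == "Approved")]
print(approvals.groupby("case_id")["user_id"].apply(list).head())
```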
📉 Impact: These re-approvals are adding unnecessary time. The affected requests take, on average, 0.3 days longer to reach scheduling.
🧩 Root cause: You speak to the team that rolled out the auto-approval feature. When the system launched, they added a safeguard: requests auto-approved with a confidence score below an extra, deliberately conservative threshold were still routed for manual review, just to be safe. The safeguard was never removed, even after the automated checks had proven reliable, because no one was given responsibility for revisiting it.
Awaiting technician happens twice
You notice that in some cases, the "awaiting technician" activity shows up twice, an oddity in what should be a linear fulfilment flow.
🔍 Drilldown:
- You filter for requests where Awaiting technician appears more than once.
- You inspect the path those cases follow: Approved → Awaiting technician → Reviewed → Data updated → Approved → Awaiting technician.
- You look at the Data updated event and see that, in most cases, the contact phone number is being added or corrected.
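A rough sketch of that filter, assuming the event log also carries an updated_fields attribute on Data updated events (your log may record this differently):

```python
# Find cases where "Awaiting technician" occurs more than once (the rework loop).
awaiting_counts = (event_log[event_log["activity"] == "Awaiting technician"]
                   .groupby("case_id").size())
rework_cases = awaiting_counts[awaiting_counts > 1].index

# For those cases, see which fields the "Data updated" step actually changed.
updates = event_log[event_log["case_id"].isin(rework_cases)
                    & (event_log["activity"] == "Data updated")]
print(updates["updated_fields"].value_counts().head())
```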
📉 Impact: These requests take, on average, 0.5 days longer to reach scheduling. They also force reviewers and approvers to re-review and re-approve work they've already processed, increasing workload and slowing overall throughput.
🧩 Root cause: You reach out to the dispatch team and reviewers. Dispatch is sending requests back because they cannot schedule a technician without a valid phone number. The number isn't always captured at submission, and it isn't consistently validated during the initial review. As a result, the request gets all the way to the Awaiting technician stage, then bounces back to Review just to add the missing contact number.
Delay between approval and fulfilment
Every request in the process has a delay between Approved and Awaiting technician. It's not the occasional case: the delay is present across the board, though it varies in length, and some gaps are short enough to go unnoticed.
🔍 Drilldown:
- You look at an aggregated distribution of time between Approved and Awaiting technician, and find that the delay is sometimes as long as 6 hours.
- You inspect the two events and find they're being logged by different systems: one by the case management tool, and the other by the scheduling system.
- You double-check the timestamps in both systems and confirm the consistent lag isn't due to system clock differences.
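One way to look at that gap, again as a rough sketch over the same event log (taking the first occurrence of each activity per case as a simplification):

```python
# Time between a case first being Approved and first reaching Awaiting technician.
approved_at = (event_log[event_log["activity"] == "Approved"]
               .groupby("case_id")["timestamp"].min())
awaiting_at = (event_log[event_log["activity"] == "Awaiting technician"]
               .groupby("case_id")["timestamp"].min())

handover_gap = (awaiting_at - approved_at).dropna()
print(handover_gap.describe())                        # spread of the handover delay
print((handover_gap > pd.Timedelta(hours=1)).mean())  # share of cases waiting over an hour
```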
📉 Impact: This delay slows down the process before the team can even start scheduling technicians. It eats into the SLA target, pushing more requests into breach territory even when everything else is running smoothly.
🧩 Root cause: You speak to a developer familiar with the integration. They confirm that the interface between systems isn't event-driven. The scheduling system polls the case management tool for updates at fixed intervals instead of reacting instantly when a request is approved. This was the simplest path to a quick integration, and it's never been revisited as the team is working on other priorities.
All three findings have a measurable business impact. The dashboard showed the symptoms, but only process mining gave you a structured way to pinpoint where in the process they were coming from.
Step 3: Optimisation
With root causes identified, the team rolls out three targeted improvements:
- Remove the redundant safeguard step from the auto-approval flow. If a request meets the confidence threshold, it proceeds straight to scheduling; if it falls below, it gets routed to manual review.
- Update the submission form to always capture the client's phone number, and introduce a validation step that prevents approval unless all required contact information is present.
- Switch from polling to event-driven integration between the case management and fulfilment systems, triggering the next step as soon as approval is logged.
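As an illustration of the third change, an event-driven handover could look like the minimal webhook sketch below; the endpoint, payload shape, and create_scheduling_task call are all hypothetical stand-ins for the real systems' APIs.

```python
from flask import Flask, request

app = Flask(__name__)

def create_scheduling_task(request_id: str) -> None:
    # Placeholder for the real scheduling-system API call.
    print(f"Scheduling task created for request {request_id}")

# Instead of the scheduling system polling for approved requests at fixed intervals,
# the case management tool calls this webhook the moment a request is approved.
@app.route("/webhooks/request-approved", methods=["POST"])
def on_request_approved():
    payload = request.get_json()  # e.g. {"request_id": "..."}
    create_scheduling_task(payload["request_id"])
    return "", 204
```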
Step 4: Monitoring
Because the process data pipeline is already in place, monitoring the fixes is straightforward.
- The team tracks whether requests with high auto-approval confidence scores still include any manual review and manual approval steps.
- They monitor how many requests return to review after reaching "awaiting technician" and what updates to the data are being made. This helps them confirm that the fix is working and highlights any other fields that might be causing similar issues.
- They add an alert for any delay between approval and the fulfilment trigger. Since event-driven integration should be near-instant, any lag suggests the sync may not be functioning as expected.
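That last check could be as simple as a scheduled query over the same event log, something like the sketch below (the five-minute tolerance is an assumption):

```python
# Alert when the handover from Approved to Awaiting technician is no longer near-instant.
ALERT_THRESHOLD = pd.Timedelta(minutes=5)  # assumed tolerance for "near-instant"

approved_at = (event_log[event_log["activity"] == "Approved"]
               .groupby("case_id")["timestamp"].max())
awaiting_at = (event_log[event_log["activity"] == "Awaiting technician"]
               .groupby("case_id")["timestamp"].max())

slow = (awaiting_at - approved_at).dropna()
slow = slow[slow > ALERT_THRESHOLD]
if not slow.empty:
    print(f"{len(slow)} requests exceeded the handover threshold")
```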

Process mining works best when it is part of how a team operates day to day
Final Thought
The drilldowns described here (filtering paths, inspecting metadata, and connecting the dots) might seem highly manual. And traditionally, they were. But that is changing fast. AI is starting to augment this investigative process, helping teams surface unusual patterns, rank variants by business impact, and suggest likely root causes. We'll explore how that's evolving in future articles, and how Flow Myna is building this capability directly into its product.
That said, it is important to stay grounded. There will almost always be part of the puzzle that lives outside the data. When humans are involved in setting up systems and making operational decisions, no process view alone will tell you everything. But process mining will give you a fast, structured way to uncover issues, build evidence, and guide the right conversations.
In my experience, real impact comes when process mining stops being treated as a standalone tool and becomes part of how a team works. When it's embedded into daily operations and continuous improvement routines, it helps teams learn from how work actually happens. They can make changes with confidence, and monitor results as they unfold rather than weeks later when a complaint lands or when bad habits are already entrenched.
That's how you move beyond reporting to real operational intelligence, where evidence drives action and changes stick.