Why Your Post-ERP Process Owners Don't Own Anything
The accountability gap nobody maps in the cutover plan. How to find it, name it, and fix it in 30 days.
I work inside regulated utilities after the go-live — when performance hasn't stabilized, exception queues are still growing, and the team is running the new system like the old one. That's the gap I close.
Director · SAP IS-U/S4 · Customer Ops · P2P/AP · Regulated Industries

Real metrics from regulated utility transformations. This is what's possible when process design meets execution discipline.
Not a firm. One operator with a specific skill set for utilities in transformation.
Designing the governance layer that makes transformation stick — process owners, cadences, escalation paths, and accountability structures.
Mapping, cleaning, and standardizing processes from field to finance. P2P, AP, customer billing, premises — wherever the friction is.
Building the metric trees, KPI dashboards, and leader routines that turn data into decisions — not dashboards into presentations.
Getting field teams and back-office staff to actually use what you built. Not awareness sessions — behavioral adoption with measurement.
Practical automation scoping: what to automate, in what order, with what controls. SAP, Power Automate, and Celonis experience.
Every engagement produces a metric. Here are three.
Midwest utility · Gas & Electric · 1.2M customers
Manual validation created billing delays across 40,000+ moves per month.
SAP IS-U · Power Automate · Process mapping
Multi-state utility · Shared services · 8 states
Post-ERP cutover left $8M in unapplied cash and 900+ aging AP items.
SAP S/4HANA · Power BI · RACI redesign
Integrated utility · Operations + Finance
Executives had dashboards. Nobody was acting on them within the decision window.
Power BI · Celonis · Leader routines
Written for operators, not audiences.
The accountability gap nobody maps in the cutover plan. How to find it, name it, and fix it.
Cycle time without segmentation tells you nothing actionable. Here's the split that changes the conversation.
A 3-step intake that saves six months of rework. Most automation fails at process definition, not technical execution.
Read →Write a 200-page report and disappear
Staff junior analysts as your primary contacts
Pitch strategy without owning execution
Count a project "done" at go-live
Call a rollout successful without 90-day metrics
Stay hands-on from diagnostic to post-launch measurement
Work directly with your operators, not around them
Deliver process designs that survive contact with real crews
Define success before we start and prove it when we finish
Measure adoption at 90-day intervals and show the math
I've spent my career inside regulated utilities during the hardest part — the 18 months after a major transformation when the consultants leave and the real work begins. I fix what doesn't work, install what was missing, and build the systems that make performance measurable.
My background spans customer operations, shared services, P2P/AP, SAP implementations, and post-ERP stabilization. I work directly with operators, not around them.
I fix what doesn't work, install what was missing, and build the systems that make performance measurable.
— Josh Karpinski, Process to Proof
15 minutes. No pitch. Just a direct conversation about what you're working through.
Book a Call
Or email: josh@processtoproof.com
Advisory, fractional leadership, and accelerated workshops for utilities in transformation. No retainer required to start.
Each engagement is scoped before it starts. No open-ended retainers, no scope creep.
For: Utilities 6–18 months post-ERP or mid-transformation with stalled outcomes
Most engagements identify $500K–$2M in recoverable value in week one.
For: Teams where SAP is live but nobody owns the end-to-end process
Process ownership gaps are the #1 reason post-ERP performance plateaus.
For: Leaders with dashboards that don't drive decisions
Operational decisions that used to take 2 weeks happen in 48 hours.
For: Utilities wanting to automate but unsure where to start safely
Avoid the two failure modes: automating broken processes and under-scoping controls.
I configure, test, and validate the workflows that field crews use every day. I've worked inside the system, not around it.
Every engagement follows the same four-phase structure. Tight timelines, clear owners, measurable progress at each step.
15 minutes. I listen to the problem and tell you honestly whether I can help.
Defined outcomes, timeline, and access requirements. No open-ended scope.
Direct work with your team. Weekly check-ins on progress vs. target metrics.
Your team can run it. Documented outcomes. No dependency on me to maintain it.
Not "Discover → Design → Deliver." I stay through measurement. You see results before I leave.
Pick what fits the problem. Mix them when needed.
Lightweight thinking partner. No project team required.
Embedded 1–3 days/week. Doing the work alongside your team.
Accelerated team alignment on a complex problem.
Yes. I've worked alongside KPMG, EY, Accenture, and boutique firms. I typically fill the gap between strategy and execution that large firms don't staff for — the day-to-day operational accountability layer. My work and their work are complementary.
That's the norm, not the exception. I work with your team, not around them. The goal is always to leave a system your people can run without me at the end of the engagement.
Access to three things: the process data (whatever state it's in), the people who run it day-to-day, and a sponsor with decision authority. Everything else I figure out as I go.
My strongest results are in regulated utilities — gas, electric, water, combined. I occasionally work in adjacent regulated industries (telecom, public sector) where the operational profile is similar and the post-ERP challenge is the same.
Every engagement starts with a 15-minute scoped conversation. No pitch, no deck.
Book a Call
Measured outcomes from regulated utility engagements
Every engagement produces a measurable result. No case studies without numbers. No numbers without context.
Click any card to see the full breakdown.
Midwest utility · Gas & Electric · 1.2M customers
Manual validation created billing delays across 40,000+ moves per month, generating complaint spikes and write-offs.
Manual premise validation was creating billing delays across 40,000+ moves per month. The 23-step process had 11 non-value-added steps, generating downstream complaint spikes and adjustment write-offs.
Multi-state utility · Shared services center · 8 states
Post-ERP cutover left $8M in unapplied cash, aging AP with 900+ items, and daily vendor escalations to the CFO.
Post-ERP cutover left $8M in unapplied cash, aging AP with 900+ items over 90 days, and daily vendor escalations to the CFO's office. Standard triage was not working.
Integrated utility · Operations + Finance · 5 regions
Regional leaders had dashboards but no structured routine for acting on them. Systemic issues went undetected for weeks.
Regional leaders had access to dashboards but no structured routine for acting on them. Systemic issues went undetected for 2–3 weeks before reaching leadership attention.
Southwest utility · Water & Gas · SAP S/4 go-live
30 days post-go-live, billing exceptions were running 3x expected volume, causing revenue delays and audit exposure.
30 days post-go-live, billing exceptions were running 3x expected volume, causing revenue recognition delays and audit exposure heading into quarter close.
Multi-state electric utility · Regulatory + Operations
Each state had its own reporting format and manual process. Compliance team spent 60% of capacity on formatting, not analysis.
Each of 12 states had its own reporting format, timeline, and manual process. Compliance team was spending 60% of capacity on formatting and data pulls, not analysis or risk management.
Municipal utility · Finance + Customer Operations
Leadership knew automation was possible but had no framework for deciding what to automate first, in what order, with what controls.
Leadership knew automation was possible but had no framework for evaluating where to start. Previous attempts had automated broken processes, creating new problems instead of solving old ones.
Josh doesn't do deck work. He shows up, finds what's broken, and fixes it with the people who have to live with the result.
— VP Operations, Midwest Utility (name withheld)
We had the dashboard. We didn't have the cadence. Josh installed both and left us something we could actually run.
— Director of Finance, Multi-State Utility
Process, performance, adoption, and automation — written for operators, not audiences.
The accountability gap nobody maps in the cutover plan. How to find it, name it, and fix it in 30 days.
Cycle time without segmentation tells you nothing actionable. Here's the split that changes the conversation.
A 3-step intake that saves six months of rework. Most automation fails at process definition, not technical execution.
Why 90% of change programs mistake awareness for behavior change — and what the 10% do differently in utilities.
The post-transformation performance plateau is predictable and preventable. Here's the pattern and how to break it.
A weekly cadence isn't a meeting. It's a decision system. How to design one that works at the director and VP level.
Templates and frameworks from actual engagements.
A three-tier framework connecting board-level outcomes to frontline metrics your teams actually control.
Daily, weekly, and monthly meeting rhythms that keep post-ERP operations from sliding backward.
Find out who actually owns your processes — before your next audit does. Includes a 10-question scoring rubric.
Score your processes across five dimensions before committing to a build. Includes a sequencing guide.
No cadence. No filler. Just direct notes when something is genuinely useful.
No spam. Unsubscribe anytime.
Director · Utilities Transformation · SAP IS-U/S4 · Customer Operations · P2P/AP
I've spent my career inside regulated utilities during the hardest part — the 18 months after a major transformation when the consultants leave and the real work begins. I don't do strategy decks. I fix what doesn't work, install what was missing, and build the systems that make performance measurable.
My background spans customer operations, shared services, P2P/AP, SAP S/4HANA implementations, and post-ERP stabilization. I work directly with operators, not around them. The goal is always to leave something the team can run without me.
Each step builds on the previous one. Skip any of them and the outcome doesn't hold.
What does success look like in 90 days? What can't we change? These two questions save weeks of wasted effort.
Map from strategic outcome to operational input. Find where the data breaks. That's where the problem is hiding.
Metrics without routines are just dashboards. Build the cadence that makes the data actionable at every level.
Automate what's stable and high-volume. In that order. Not the other way around.
Leading enterprise-wide process ownership design, KPI systems, and operational adoption programs for regulated utilities. Focus on SAP post-go-live performance and governance.
One of the World's Largest Regulated Utilities
Executive liaison across a global CIO portfolio spanning enterprise transformation, shared services governance, and technology-enabled business change. Operated at C-suite level across cross-functional initiatives and organizational change programs.
Managed $105M annual billing operations scope, installed compliance reporting across 12 states, directed post-ERP stabilization across 5 operational regions.
Redesigned the premise validation workflow — reduced billing cycle time by 11 days, cut manual steps by more than 75%. Embedded with field and billing operations teams.
Resolved $8M unapplied cash backlog, restored on-time payment rate to 96%, cleared 900+ aging AP items within 60 days of engagement.
Identified $1.4M in automation ROI across 34 candidate processes. Delivered first 3 automations in 90 days with zero control failures in year one.
If the problem sounds familiar, 15 minutes will tell us whether it's worth a longer conversation.
Book a Call
Reach out directly. I respond to every message and I'll tell you honestly if it's a fit.
No agenda required. Just a conversation about what you're working through.
Schedule Now →
"I take 4–6 engagements per year. If the problem is real and the timing is right, I'll tell you."
— Josh Karpinski
Within 24 hours on business days. If it's urgent, email directly at josh@processtoproof.com.
A real operational problem, someone with authority to act, and access to the data and people who run the process. I don't need everything figured out — I need access and a sponsor.
Yes. Standard practice for all engagements. Sent at the point of proposal, before any detailed scope discussion.
Last updated: March 2026
Currently mid-engagement on a large-scale SAP S/4HANA migration for a Midwest gas distribution utility. Focus is on work order management and field crew scheduling — specifically the gap between system-designed workflows and how crews actually operate. This is where transformations fail: the system expects one workflow, but crews have been doing the job a different way for 15 years. My job is to bridge that gap before go-live, not after.
Developing a utility-specific adoption measurement framework. The goal: give operations leaders a 90-day readiness score before go-live, not after. Right now, utilities measure adoption by login counts and transaction volumes. That's output measurement, not outcome measurement. I'm building a framework that measures operator confidence, process adherence, and issue resolution speed — the metrics that actually predict whether a transformation will stick.
Studying how utilities are approaching AI in operations — specifically predictive maintenance scheduling. My take so far: the data quality problem comes before the AI problem. You can have the best ML model in the world, but if your asset condition data is garbage, your predictions will be garbage. Most utilities are still manually logging asset inspections. Until that changes, AI won't move the needle.
Reading: "The Toyota Way" — revisiting lean manufacturing principles for application in utility field operations. There's a direct parallel: factory floor is to manufacturing what the field is to utilities. Same principles. Standardized work. Continuous small improvements. Respect for frontline workers. Most utilities try to solve field problems with software. The answer is usually better processes.
Same patterns, different vendors, different utilities. A gas utility fails on SAP adoption the same way an electric utility fails on AMI. A water utility struggles with MDM the same way another struggled with CCB. I'm building a taxonomy of failure modes so utilities can see their challenge coming before they hit it. The patterns are identifiable. The solutions are predictable. But nobody talks about them publicly, so every utility thinks they're the only one failing.
The gap between what technology vendors promise and what utility operators actually need. It's wider than most executives admit. Vendors show 10-minute process improvements in their demo. That's real. But what about training? Resistance management? Organizational readiness? Change fatigue? None of those show up in the vendor pitch. And yet they're why half the implementations plateau post-go-live. Executives need to be honest about this gap when they're evaluating systems.
Not taking on new engagements until Q3 2026. Current project runs through June. I'm booking calls only to discuss potential Q3+ starts. If you have something time-sensitive, this may not be the right time.
I speak to operators, not at them. Every talk is built from real project data — not slides from a research library.
Inquire About Speaking →
Each presentation is customized to your audience and their specific challenges.
5 adoption failure patterns from gas, electric, and water utility rollouts — with real case data from SAP and AMI implementations.
How to instrument a transformation before it starts — so you're not chasing KPIs at go-live. Includes the 90-day adoption checkpoint framework.
Hands-on session for ops leaders deploying SAP, AMI, MDM, or CCB. Covers resistance patterns, training design, and what actually gets field crews to change behavior.
Operational Excellence World Summit
Orlando, FL
SAP for Utilities Conference
Annual utilities track
Edison Electric Institute Leadership Summit
Executive leadership track
Contact me for references and video clips.
I read your agenda and attendee list before building a single slide.
Every claim comes from a documented project outcome. I name the metric, the system, and the timeframe.
Your audience will recognize the problems I'm describing because they're living them.
Tell me your audience, your theme, and your date — I'll tell you if I'm the right fit.
Start a Conversation →
The accountability gap nobody maps in the cutover plan. How to find it, name it, and close it.
The go-live celebration ends on a Friday. By the following Wednesday, the first escalation lands in someone's inbox. Nobody is sure whose problem it is. So it goes up.
That escalation pattern — the one where routine operational issues reach directors and VPs because nobody below them has clear authority — is not a people problem. It's a design problem. And it's almost never addressed in the cutover plan.
Most organizations assign process owners during implementation. They appear in RACI charts. They sign off on documentation. They attend the go-live readiness review.
Then the system goes live, the consultants roll off, and something quietly breaks: the process owner has a title but no operating authority. They can't change system configuration. They can't resolve exceptions without escalating. They have no metric tied to their name. And nobody calls them when something goes wrong — because nobody knows who that is.
This isn't a failure of intent. It's a failure of design. Real ownership requires three things, and most post-ERP environments deliver one.
Three things are required for real process ownership: the authority to make decisions inside the process without escalating, a metric tied to the owner's name, and a forum where they answer for it on a regular cadence.
Pull the last 60 days of escalations. For each one, ask: at what level was this resolved, and at what level should it have been resolved? The gap between those two answers is your ownership gap.
In most post-ERP environments, 40–60% of director-level escalations are decisions that should have been made two levels down. The director isn't wrong to resolve them — they have to. But every one of those escalations is a signal that ownership is missing somewhere.
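The pull itself doesn't need tooling. Here's a minimal sketch of how the 60-day analysis might run from a flat export — the file name, column names, and numeric level ranks are illustrative assumptions, not a prescribed format:

```python
# Minimal sketch of the 60-day escalation pull. Column names
# (escalation_id, resolved_level, appropriate_level) are placeholders for
# whatever your ticketing or triage log actually captures; levels are
# assumed recorded as ranks (1 = frontline ... 4 = director).
import pandas as pd

df = pd.read_csv("escalations_last_60_days.csv")

# Misplaced = resolved above the level where it should have been resolved.
df["levels_too_high"] = df["resolved_level"] - df["appropriate_level"]
misplaced = df[df["levels_too_high"] > 0]

share = len(misplaced) / len(df) if len(df) else 0.0
print(f"{len(misplaced)} of {len(df)} escalations resolved too high ({share:.0%})")

# Where the ownership gap sits: misplaced items by the level that
# ended up resolving them.
print(misplaced.groupby("resolved_level")["escalation_id"].count())
```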
Once you have the escalation map, match each gap to a process. Then ask three questions about each process: Who can change it without asking permission? What formal metric is tied to that person's name? And who reviews it regularly, in what forum?
If the answers are "unclear," "nothing formal," and "nobody regularly" — you've found the gap. Name it. Assign it. Build the forum. That's the work.
Exception queues grow. Your best operational leaders spend time resolving things that should never reach their desk. Key-person dependency builds quietly — one person carries institutional knowledge that isn't documented anywhere. Then they leave, or get promoted, and performance doesn't erode gradually. It stops.
The 18-month post-ERP performance plateau isn't inevitable. It's what happens when the governance layer doesn't get built. The platform works. The processes are documented. But nobody actually owns them.
Key Takeaway
Process ownership isn't a role description. It's authority + a metric + a forum. All three, or it doesn't hold.
Related Download
A 10-question scoring rubric to find ownership gaps before your next audit does.
Cycle time without segmentation tells you nothing actionable. Here's the split that changes the conversation.
Your executive team reviews average cycle time every month. It's green. Has been for three months. Then an auditor, or a customer, or a regulator surfaces a category of transactions that's been sitting at 45 days. The average was hiding it.
This is the most common performance reporting problem I see in utility operations. The metric isn't wrong — average cycle time is a real number. But it's answering a question nobody should be asking. The question isn't "how long does it take on average?" The question is "where are things breaking, and why?"
When you average cycle time across all transaction types, a high-volume clean category masks the performance of a low-volume exception category. Your 3-day average looks fine. Your 45-day exception queue is invisible.
In billing operations specifically: a utility billing 800,000 accounts per month might have 96% of bills processing in 2–3 days. The remaining 4% — roughly 32,000 bills a month, driven by exceptions from field reads, premise disputes, and returned mail — can sit for weeks. The average looks fine. The 4% creates the complaint calls, the regulatory exposure, and the revenue delay.
Segment your cycle time into at least three buckets: clean-path transactions, exception transactions, and aged exceptions (anything over a defined threshold — 15 days, 30 days, whatever your SLA requires).
Report all three. Trend all three. Assign owners to each bucket.
Now your executive conversation changes. Instead of "cycle time is 4.2 days," it becomes: "Clean-path is 2.1 days — stable. Exception cycle time is 28 days — up from 22 last month. Here's the category driving it and here's the root cause."
That second conversation is one your executives can act on. The first one just generates a nod and moves the agenda forward.
You don't need new software. You need a transaction code that distinguishes exception status, and a weekly pull that segments by that field. In SAP environments, this exists — it's usually already there. What's missing is the habit of reporting it segmented.
Start with one process. Pick the one with the most escalations last quarter. Add the segmentation. Run it for two reporting cycles. The story it tells will be more useful than anything the average has been telling you.
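If it helps to see the mechanics, here's a minimal sketch of that weekly segmented pull — the file name, column names, and 15-day aged threshold are placeholders for whatever your extract and SLA actually use:

```python
# Minimal sketch of the segmented weekly pull. Assumes an extract with
# columns transaction_id, cycle_days, and an is_exception flag (0/1);
# the 15-day aged threshold is illustrative — use your SLA.
import pandas as pd

AGED_THRESHOLD_DAYS = 15

df = pd.read_csv("billing_transactions_week.csv")

def bucket(row):
    if not row["is_exception"]:
        return "clean_path"
    if row["cycle_days"] > AGED_THRESHOLD_DAYS:
        return "aged_exception"
    return "exception"

df["bucket"] = df.apply(bucket, axis=1)

# Report all three, trend all three, assign an owner to each.
summary = df.groupby("bucket")["cycle_days"].agg(
    volume="count",
    avg_days="mean",
    p90_days=lambda s: s.quantile(0.90),
)
print(summary)
```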
Key Takeaway
Aggregate cycle time is a reporting metric. Segmented cycle time is a diagnostic. One tells you where you are. The other tells you what to do.
Related Download
A three-tier framework connecting executive outcomes to the operational inputs teams actually control.
Most automation fails at process definition, not technical execution. A 3-step intake that saves six months of rework.
The automation vendor presents a compelling demo. The bot runs the process in 90 seconds. Leadership approves the project. Six months later, the automation is processing exceptions faster than humans ever did — which means the downstream queue is now growing three times as fast.
This is what automating a broken process looks like. The technical execution was fine. The process wasn't ready.
Industry estimates suggest that 50–70% of automation initiatives fail to deliver ROI in year one. The failure pattern is consistent: the process being automated was not stable, not documented to the actual current state, or not owned by anyone accountable for the outcome.
Automating an unstable process doesn't fix it. It locks it in. Whatever workarounds, exceptions, and undocumented steps exist in the manual process get encoded into the automation — and they run at machine speed.
The three-step intake below isn't a gate to slow things down. It's the work that makes automation actually land.
One documented version. No workarounds. Exceptions resolved at source, not routed around. If your process documentation shows what's supposed to happen but your team follows different steps — the process isn't ready. Document what's actually happening first, then standardize it.
Run it for 60–90 days under measurement. Track cycle time, error rate, and exception volume. Understand the variance before you try to automate it. If you don't know the baseline, you won't know if the automation improved anything — or broke it.
Someone's name is on the outcome. They can approve exceptions. They have a metric. They have a cadence. Without an owner, the automation runs but nobody governs it — and when something breaks, it breaks without anyone noticing until the queue is six weeks long.
If you have three YES answers, scope it. Build the automation with a rollback plan and monitoring layer. Go live.
If you have two or fewer YES answers — stop. The 60–90 days you spend stabilizing and measuring will save six months of remediation on the back end. In any regulated or customer-facing environment, remediation after a bad automation rollout costs significantly more than getting the process right first.
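If you want the intake as something you can run rather than debate, here's a minimal sketch against a weekly baseline file. The column names, the nine-week minimum, and the 20% variation threshold are illustrative assumptions — the point is that "stable" and "measured" become checkable, while "owned" stays a human judgment:

```python
# Minimal sketch of the three-question intake, using the 60–90 day
# weekly baseline to answer "stable" and "measured". File layout and
# the 20% variation threshold are illustrative assumptions, not a rule.
import pandas as pd

baseline = pd.read_csv("process_weekly_baseline.csv")  # one row per week

required_metrics = {"cycle_days", "error_rate", "exception_count"}
measured = required_metrics <= set(baseline.columns) and len(baseline) >= 9

# Crude stability test: enough history, and week-to-week cycle time
# variation under ~20% of the mean.
stable = (
    measured
    and baseline["cycle_days"].std() / baseline["cycle_days"].mean() < 0.20
)

# "Owned" can't come from the data: set it True only if one named person
# approves exceptions, carries the metric, and sits in a review cadence.
owned = False

if stable and measured and owned:
    print("Three YES answers: scope it, with a rollback plan and monitoring.")
else:
    print("Two or fewer: stop. Stabilize and measure before scoping.")
```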
Key Takeaway
Stable. Measured. Owned. All three, before you scope. Skipping steps doesn't save time — it shifts cost from implementation to remediation.
Related Download
Score your processes across five dimensions. Includes a sequencing guide and scoring interpretation.
Why 90% of change programs mistake awareness for behavior change — and what the 10% do differently in utilities.
The training is done. Every user completed the modules. Attendance was 94%. The go-live readiness checklist says green.
Sixty days later, half the team is still running manual workarounds alongside the new system. The help desk queue has tripled. And the operations manager is spending three hours a day answering questions the training was supposed to cover.
This is not a training failure. The training did exactly what training does — it told people what changed. The problem is that knowing what changed is not the same as changing how you work.
Training creates awareness. Adoption is a change in behavior that holds under pressure — when the system is slow, when volume spikes, when the supervisor isn't watching. That's a different problem. It requires a different approach.
In utility operations specifically, the pressure points are predictable: billing cycles, outage response, seasonal volume peaks, regulatory reporting windows. Adoption fails when people revert to what's comfortable under those conditions. And they will revert — unless the new way is visibly easier and has visible consequences for not using it.
What the 10% do differently
They design adoption before go-live, not after. They treat it as an operational discipline, not a communication plan. Specifically:
Before any go-live, ask this: does every frontline supervisor know what "correct" looks like in the new process, and do they have a way to see when their team isn't doing it?
If the answer is no — the training is complete but the adoption system isn't built yet. That's the work that determines whether the investment in the new system actually holds.
Adoption isn't a program you run once. It's the operational infrastructure that keeps performance from sliding back. It needs to be designed, owned, and measured — the same as any other process.
Key Takeaway
Training tells people what changed. Adoption is when they actually change. Design the adoption system before go-live — not after the help desk queue builds.
Related Download
Find where adoption is failing before the audit finds it. 10-question scoring rubric.
The post-transformation performance plateau is predictable and preventable. Here's the pattern — and how to break it.
Month 6 after go-live: the system is stable, the project team is celebrating, and the final engagement report is getting polished. Metrics look solid. The consultants roll off.
Month 12: Exception queues are 20% higher than they were at go-live. The dashboards that the project built are no longer being updated. The process documentation is already out of date because three workarounds have crept back in. The senior manager who owned the new process has moved to a different initiative.
Month 18: Leadership is discussing whether to bring the consultants back.
I've watched this pattern play out across multiple utility implementations. It's not inevitable. But it will happen if you don't design against it specifically.
Three things go wrong simultaneously in the 6–12 months after a major implementation:
During the project, the consulting team runs the cadence. They own the escalation path. They resolve blockers. When they leave, that structure leaves with them. Unless it was deliberately designed into the internal operating model, the team defaults to how they ran things before.
The people who built process knowledge during the project — who know why decisions were made, what the exceptions mean, how the system was configured — get promoted, reassigned, or leave. If that knowledge wasn't documented and distributed, it walks out the door with them.
The reporting that existed during the project was run by the project. When the project ends, the reporting often ends with it. Teams revert to whatever dashboards were easiest to maintain — which are usually the ones that tell them less than they need to know.
This is post-ERP stabilization work, and it needs to start during the implementation — not after. Specifically:
Build the governance layer before you need it. Before the consulting team rolls off, define the internal cadence that replaces their operating rhythm. Who owns each process, what metric they own, what forum they report into, and what their escalation path is. Document it. Activate it 60 days before project close — not on the day the consultants leave.
Assume the key people will move. Document ownership with enough detail that a new person can step into it in 30 days. If the only person who can run a process is the person who was on the implementation team, you have key-person dependency — not a sustainable operation.
Keep the metrics running. The reporting you built for the project needs to survive the project. Assign it to someone internal. Simplify it if needed — but keep it. The moment you stop measuring, you stop knowing. And in regulated utilities, what you don't know finds you eventually.
Key Takeaway
The 18-month plateau isn't a system problem. It's a governance problem. The platform works. What's missing is the internal operating model that keeps performance from drifting back.
Related Downloads
A weekly cadence isn't a meeting. It's a decision system. How to design one that works at the director and VP level.
The team has the data. The dashboards are built. The weekly meeting is on the calendar. And yet the same issues show up every week, decisions get deferred, and the most capable people in the room spend the first 20 minutes figuring out what they're actually there to decide.
The meeting isn't the problem. The absence of a decision system is.
Most organizations have calendars. They have recurring meetings, standing calls, monthly reviews. What they don't have is a system that connects the meeting to the decision to the action to the result — at every level of the organization, in the right cadence for that level.
A real cadence has four layers, each operating at a different frequency and serving a different decision type: weekly huddles for frontline operations, biweekly team reviews for managers, monthly multi-process reviews for directors, and quarterly business reviews for executives.
Before you schedule anything, answer these four questions for each level of the cadence: What decision does this level exist to make? Who in the room has the authority to make it? What data do they need in front of them to make it? And where does an item go when it can't be resolved at this level?
A cadence designed around these questions runs differently from one that got scheduled because "we should probably have a weekly." It takes the same amount of time on the calendar. It produces a different outcome in the business.
Leader routines are infrastructure. They hold performance between the big initiatives, through the transitions, and after the consultants leave. Design them with the same rigor you'd design any other system your operation depends on.
Key Takeaway
A meeting is on the calendar. A cadence is a decision system. Know what decision each level exists to make — and who has the authority to make it in the room.
Related Download
The four-level operating rhythm. Weekly huddles to QBRs — structured for decision velocity.
The accountability gap nobody maps in the cutover plan — and how to close it in 30 days.
Every SAP go-live I've worked in had a RACI. None of them survived contact with month two.
The accountability gap isn't in the cutover plan — it's in what happens to decisions after the system goes live. Exception queues grow. Escalations pile up at the VP level. The people listed as process owners are doing their old jobs plus the new system plus the escalations nobody else wants to touch. The RACI said they owned it. Nobody defined what owning actually means in the new environment.
Here's what I consistently find: the problem isn't that people don't care. It's that no one has told them what a decision looks like in the new process, what they're authorized to close without escalating, or when to escalate versus absorb. So they default to the safest path — surface everything upward and wait.
1. Map every exception to a named owner — not a team, a person. If an invoice dispute sits in a queue, one person's name should appear next to it. Not "AP Team." Not "Billing Operations." One person who knows they own it and has the standing to close it.
2. Run a one-hour authority calibration. Sit down with each process owner and go through the ten most common decisions in their domain. For each one: can they make it without asking someone above them? If the answer is usually no, you don't have a process owner — you have a relay station. Fix the authority, not the person.
3. Build a weekly 20-minute exception review into the operating rhythm. Not as a status meeting. As a decision session. The question isn't "where do things stand?" It's "what needs a decision this week and who's making it?" That single structural change — a named time to close open items — does more for process ownership than any RACI update.
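The pull behind that 20-minute session can be deliberately simple. A minimal sketch, assuming an exception export with an ID, an owner name, an opened date, and a status — the file and column names are placeholders for whatever your queue actually exposes:

```python
# Minimal sketch of the weekly exception-review pull: every open item
# against a named owner, oldest first. Column names (exception_id,
# owner_name, opened_date, status) are illustrative assumptions.
import pandas as pd

items = pd.read_csv("open_exceptions.csv", parse_dates=["opened_date"])
items = items[items["status"] != "closed"]
items["age_days"] = (pd.Timestamp.today() - items["opened_date"]).dt.days

# Flag anything without a person's name attached — "AP Team" is not an owner.
unowned = items["owner_name"].isna() | items["owner_name"].str.contains(
    "team|operations", case=False, na=True
)
print(f"Open exceptions with no named owner: {int(unowned.sum())}")

# The agenda: one line per owner, oldest open item first.
agenda = items.groupby("owner_name").agg(
    open_items=("exception_id", "count"),
    oldest_days=("age_days", "max"),
).sort_values("oldest_days", ascending=False)
print(agenda)
```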
"Process ownership doesn't fail because people don't care. It fails because nobody defined what owning actually means in the new environment."
The system is live. The question is who's running it.
If this is the problem you're navigating, 15 minutes is enough to know if there's a fit.
Cycle time without segmentation tells you nothing actionable. Here's the split that changes the conversation.
Aggregate cycle time is one of the most misleading numbers in operations.
When you average processing time across everything — routine transactions, disputes, regulatory-flagged items, intercompany corrections — you get a number that satisfies the executive dashboard and tells you nothing about where the problem actually lives. The average looks reasonable. The underlying distribution is hiding a crisis.
In most post-ERP environments I've walked into, 15–20% of transactions are driving 80–90% of cycle time variance. That 15–20% is your real problem. But the aggregate metric makes it invisible — because the 80–85% of transactions that process cleanly are pulling the average down toward something that sounds acceptable in a leadership meeting.
Separate your clean-path transactions from your exception-path transactions. Run each as its own baseline, tracked separately, owned by different people. This one structural change typically surfaces the real problem within the first two weeks of measurement.
Set separate baselines for clean path and exception path. Your clean-path cycle time tells you how well the system works when it works. Your exception-path cycle time tells you how well your team manages complexity. They require different interventions.
Track migration between paths. What's creating new exceptions? Which process steps are generating exceptions at a higher rate after go-live than before? That's your real root cause conversation.
Build the leadership conversation around exception rate, not average cycle time. The question is: what percentage of transactions are falling off the clean path, and is that percentage going up or down? That is an actionable metric. Average cycle time is not.
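That exception-rate question is a one-screen calculation. A minimal sketch, assuming a transaction extract with a posting date and a 0/1 exception flag — the file and column names are placeholders:

```python
# Minimal sketch of the exception-rate trend: what share of transactions
# fell off the clean path each week, and which direction it's moving.
# Column names (posted_date, is_exception as 0/1) are assumptions.
import pandas as pd

tx = pd.read_csv("transactions.csv", parse_dates=["posted_date"])

weekly = (
    tx.assign(week=tx["posted_date"].dt.to_period("W"))
    .groupby("week")["is_exception"]
    .agg(total="count", exceptions="sum")
)
weekly["exception_rate"] = weekly["exceptions"] / weekly["total"]

recent = weekly["exception_rate"].tail(4)
print(weekly.tail(8))
print("Last 4 weeks:", "rising" if recent.iloc[-1] > recent.iloc[0] else "flat or falling")
```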
"If your executives are looking at a single cycle-time number, they're making decisions on the average of two completely different processes."
The fix isn't a better dashboard. It's a better question.
15 minutes is enough to know if the metric tree approach fits your environment.
Most automation fails at process definition, not technical execution. Three questions that prevent six months of rework.
Every automation engagement I've seen fail had one thing in common: they started with the tool, not the process.
The technology isn't the hard part. Scoping a bot, building a workflow, connecting a system — those problems are solvable. The failure mode I see consistently is organizations that automate a process they haven't actually stabilized, documented, or owned. The bot runs perfectly. It's running the wrong process at scale.
1. Can you describe the intended process in five steps or fewer — without workarounds? If the answer is no, you don't have a process to automate. You have a collection of adaptations that people have built around a broken process. Automating that encodes the workaround at scale. Fix it first.
2. Do you have 60 days of stable transaction data with a known error rate? If the error rate is still moving — still being discovered, still being explained — the process isn't stable enough to automate. You need a baseline that holds. Otherwise you're building to a moving target and you won't know it until month three of the deployment.
3. Is there a named human who owns the exception path? Automation will generate exceptions. Every system does. If no one owns those exceptions before you automate, you will have a bot that creates work nobody manages. In regulated environments, that work doesn't disappear — it accumulates until someone gets escalated to, usually at the wrong level.
Stabilize the process. Measure it for 60–90 days. Then automate against a known baseline with a documented rollback plan. In that sequence, automation accelerates performance. In reverse, it locks in the dysfunction and makes it faster.
"I've never walked into a post-go-live stabilization where the technology was the primary problem. It's always the 47 process steps that were documented, signed off, and never actually followed."
If you can't answer all three questions above with confidence, you're not ready to scope automation. That's not a failure — it's a scoping decision that saves six months of rework.
A 15-minute conversation can tell you where you actually stand on process readiness.
Why 90% of change programs mistake awareness for behavior change — and what the 10% do differently in utilities.
I've sat in rooms where the go-live plan had 40 hours of training scheduled and zero hours of behavior change designed.
Training tells people what the new system does. Adoption is about changing what people do — by default, under pressure, when the old way is faster or easier. Those are different problems, and treating one as the solution to the other is how you get systems that work and processes that don't.
They design for the moment of friction, not the moment of learning. Training happens before go-live. The adoption challenge happens in week two, when the system is live, the old process is still muscle memory, and the new process has a step that nobody warned them about. The organizations that succeed identify those friction points in advance and build reinforcement exactly there — not in the training room.
They track behavior, not completion rates. Training completion is a vanity metric. It tells you who showed up. It tells you nothing about what they do on Tuesday morning when the system behaves differently than the simulation. The metrics that matter: are exception-path decisions being made at the right level? Are workarounds appearing in the transaction data? Is escalation volume going up or down? One of those signals — workarounds surfacing in the transaction data — is sketched after this list.
They make the front-line manager accountable for adoption, not the project team. The project team will leave. The manager stays. If the manager isn't invested in the behavior change — if they're still allowing workarounds, still absorbing exceptions that should be resolved at the point of origin — then training was theater. The accountability for adoption belongs with the person who can reinforce it daily.
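A minimal sketch of that workaround signal, assuming the post-go-live extract carries a transaction code and that you know which codes mark the manual or legacy path — the file name, column names, and code values are placeholders to adapt:

```python
# Minimal sketch of a behavioral adoption signal: the share of
# transactions still flowing through legacy or manual-override paths
# after go-live, by week. The code values and column names are
# illustrative assumptions — substitute your own workaround markers.
import pandas as pd

WORKAROUND_CODES = {"MANUAL_OVERRIDE", "LEGACY_ENTRY"}  # hypothetical markers

tx = pd.read_csv("post_golive_transactions.csv", parse_dates=["posted_date"])
tx["is_workaround"] = tx["transaction_code"].isin(WORKAROUND_CODES)

weekly = (
    tx.assign(week=tx["posted_date"].dt.to_period("W"))
    .groupby("week")["is_workaround"]
    .mean()  # share of transactions on a workaround path
)
print(weekly.tail(8))  # rising share = adoption slipping; falling = holding
```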
"Adoption is a leadership design problem. Not a training scheduling problem."
If your post-go-live adoption plan ends with the training schedule, you're measuring the wrong thing.
Most adoption gaps are fixable. A 15-minute conversation tells us where yours actually is.
The post-transformation performance plateau is predictable and preventable. Here's the pattern — and how to break it.
The performance plateau isn't a mystery. It's predictable. And most organizations walk straight into it.
I've watched the same pattern play out across multiple post-ERP environments in regulated utilities. It unfolds on a timeline that is remarkably consistent, regardless of utility size, system, or region.
Months 1–3 post go-live: Stabilization focus. Everyone is engaged. Issues get surfaced and resolved quickly because the project team is still in the building and escalation paths are clear.
Months 4–6: Performance improves. Key players — the ones who actually knew how decisions were made — start getting pulled onto the next project. The institutional knowledge starts to leave the room.
Months 9–12: The people who knew why certain decisions were made are gone. Workarounds start accumulating. Exception queues grow. Nobody escalates it because there's no clear escalation path anymore — the project team inbox doesn't exist.
Months 15–18: A VP asks why performance is back to where it was before the project. The answer is almost always the same: nobody installed the governance layer that makes transformation stick.
A KPI system tied to named process owners, not project milestones. When the project closes, the metrics shouldn't close with it. If the performance system lives inside the project plan, it disappears when the project does. Build it into operations from month one.
A leader cadence designed to surface drift — not celebrate go-live. The operating rhythm that sustains performance is not the same as the governance cadence that managed the implementation. Build a post-go-live operating model before the project team leaves, not after performance drops.
A documented exception escalation path that doesn't end at the project team inbox. Exceptions will happen. The question is where they go when they do. If the answer is unclear when the project team leaves, you've guaranteed the plateau.
"The consultants built the system. Someone has to run it."
The 18-month problem is preventable. But you have to design for it in month one, not month fifteen.
If you're inside the plateau right now, 15 minutes will tell us how far in — and what moves first.
A weekly cadence isn't a meeting. It's a decision system. How to design one that holds at the director and VP level.
I've seen organizations spend $50M on a transformation and $0 on the meeting cadence that would make it hold.
A weekly operating rhythm is not a scheduling preference. It is a decision system. The organizations that sustain performance after major change share one visible pattern: leaders at every level have a cadenced touchpoint where they see the same data, make the same decisions, and close the same loops — week after week. That pattern is not accidental. It was designed.
Weekly huddles for frontline operations. Biweekly team reviews for managers. Monthly multi-process reviews for directors. Quarterly business reviews for executives. Each level surfaces what the level below couldn't resolve. The cadence is a transmission system — issues that belong at a higher level move up, and decisions that should be made below don't get escalated unnecessarily.
Not status updates — decision points. The question that opens every meeting isn't "where do things stand?" It's "what needs a decision this week and who's making it?" That reframe changes the entire dynamic. Status updates generate more status updates. Decision points generate closed items.
Someone has to be accountable for the meeting running — not just for showing up to it. In every cadence that holds, there is one person who owns the agenda, tracks open items from the prior week, and closes the loop on what was committed. When that accountability is diffuse, the meeting degrades into a status call within 60 days.
"If your transformation has a go-live date but no operating cadence, you've installed a system without a way to run it."
Leader routines fail when they're treated as optional. They hold when they're treated as the mechanism by which the organization makes decisions — because that's exactly what they are.
15 minutes is enough to understand your current rhythm and where it's breaking down.