Your change initiative hit 80% adoption in six weeks. Congratulations. Now ask yourself: will it still be there in six months?
Because adoption rates don’t tell you whether change actually stuck. They tell you whether people logged in.
The Adoption Illusion
I’ve watched this play out dozens of times. An organization launches a new system, a new process, a new way of working. The adoption curve looks great. Leaders feel confident. Then you check back at month six and the initiative has quietly collapsed. People drifted back to workarounds. The old behaviors won. And nobody knows exactly when that happened.
Here’s the brutal truth: high adoption early doesn’t predict sustained change. Only 29% of organizations actually use the metrics they claim to follow (McKinsey). More than half of leaders can’t tell you whether their recent changes actually worked. And 50% struggle to set well-defined measures of success in the first place.
But here’s the flip side: organizations with effective measurement infrastructure see 143% ROI on change initiatives versus 35% for organizations without it. That’s a four-fold difference. Which means this isn’t just about data collection. It’s about whether your change actually drives value.
The problem isn’t the metric. The problem is we’re measuring the wrong thing.
What You’re Actually Measuring (And Why It Matters)
Most organizations track adoption. Completion rates. Training attendance. Tickets opened. These are easy to count. But they don’t tell you whether change stuck.
There are actually three levels to consider, and they build on each other.
Level 1: Change Management Performance. Was the plan executed? Did we communicate clearly? Did we provide the right training? Did we manage resistance effectively? This is about the quality of the change process itself.
Level 2: Individual Performance. Are people using the change? Are they proficient? Are they applying what they learned? This is where adoption lives — but proficiency is what matters, not just usage.
Level 3: Organizational Performance. Did business outcomes actually improve? Did productivity increase? Did quality improve? Did we retain the people we needed to retain? This is the actual outcome that justifies the change in the first place.
Most organizations measure Level 1 heavily and Level 2 superficially. Level 3? Rarely in ways that connect back to the change initiative.
The Kirkpatrick Model reinforces this hierarchy. Level 1 is reaction (were people satisfied?). Level 2 is learning (did they absorb it?). Level 3 is behavior (did they apply it?). Level 4 is results (did business outcomes improve?). The New World Kirkpatrick Model reverses the sequence: start with the results you need, then design backwards to the behaviors, learning, and reactions that drive those results.
This matters because most change measurement starts at the bottom and never reaches the top. Organizations are excellent at counting who attended the training and who rated it highly. They’re terrible at connecting that to actual behavioral change and business impact.
And there’s a critical environmental factor that Kirkpatrick Partners emphasizes: the Performance Environment. Even a perfectly designed change initiative fails if the organizational environment — the culture — doesn’t support it. Psychological safety, leadership modeling, resource availability — these environmental conditions determine whether learning transfers to behavior. Ignoring the environment is like measuring how well someone learned to swim in a classroom and wondering why they struggle in the ocean.
The problem: if you only measure Levels 1 and 2, you miss the signal about whether any of this actually mattered. You end up celebrating completion rates while the actual change dies quietly in the hallway.
Behavioral Indicators: What People Actually Do
Here’s where I’m going to challenge the typical metrics list.
When organizations say “embrace change” or “adopt the new process,” those aren’t measurable. They’re aspirational. And you can’t manage what you can’t measure.
What you need are observable behavioral indicators. These are concrete, specific, and verifiable.
In my experience, the behavioral shifts that matter are:
- Leaders communicating openly about why the change happened, what it means, and what’s next. You can measure this: communication cadence, message clarity, leader visibility during implementation.
- Employees surfacing concerns without fear. In cultures where people are afraid to push back, resistance goes underground. You can measure this: anonymized pulse survey responses, town hall questions, cross-functional discussions.
- Cross-functional collaboration increasing. New processes often require people from different teams to work together. You can measure this: project team composition, meeting patterns, information sharing across boundaries.
- Experimentation rather than rigid adherence. Change is messy. Teams that try, learn, and adjust are more successful than teams that treat the new way as scripture. You can measure this: rapid testing cycles, iteration speed, failure tolerance (not punishing experimentation that didn’t work).
These require different measurement methods: 360-degree feedback, direct observation of team dynamics, pulse surveys with open-ended questions. It’s more labor-intensive than counting logins. But it gives you signal about whether the culture is actually shifting.
Psychological Safety: The Leading Indicator Nobody’s Watching
Amy Edmondson’s research shows that teams with high psychological safety perform five times better than teams without it. Not four times. Five.
Psychological safety is the belief that you can speak up, disagree, admit mistakes, and ask for help without fear of embarrassment or negative consequences. It’s not about being nice. It’s about whether the environment is safe enough for people to be honest.
Here’s why this matters for change: people won’t adopt a change they have concerns about if they don’t feel safe surfacing those concerns. They’ll comply on the surface and resist quietly. Or they’ll quit.
You can actually measure psychological safety. The Psychological Safety Index (PSI) is seven statements on a seven-point scale. It takes five minutes to administer. And the data is remarkably predictive.
But here’s the critical warning: don’t turn PSI into a KPI target with a goal. “We want a 6.5 average by Q3” misses the point entirely. Psychological safety isn’t something you optimize for public consumption. It’s something you diagnose to understand how your team is actually functioning, then you adjust leadership behavior and organizational systems to improve it.
Measure it. Learn from it. Act on it. But don’t gamify it.
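To make the measurement concrete, here is a minimal sketch of scoring a seven-item, seven-point survey like the PSI. The specific item wording and which items are reverse-scored vary by instrument; the indices below are illustrative assumptions, not the exact published scale.

```python
# Hypothetical sketch: score a 7-item, 7-point psychological safety survey.
# REVERSE_SCORED indices are assumptions for illustration -- real instruments
# specify which negatively phrased items must be flipped before averaging.

REVERSE_SCORED = {0, 2, 4}  # assumed negatively phrased items
SCALE_MAX = 7

def score_response(answers):
    """Average one respondent's 7 answers (each 1-7), flipping reverse-scored items."""
    assert len(answers) == 7 and all(1 <= a <= SCALE_MAX for a in answers)
    adjusted = [
        (SCALE_MAX + 1 - a) if i in REVERSE_SCORED else a
        for i, a in enumerate(answers)
    ]
    return sum(adjusted) / len(adjusted)

def team_psi(responses):
    """Mean score across anonymized team responses."""
    return sum(score_response(r) for r in responses) / len(responses)

team = [
    [2, 6, 3, 5, 2, 6, 7],
    [1, 7, 2, 6, 1, 7, 6],
]
print(round(team_psi(team), 2))  # prints 6.21
```

The point of the sketch is the workflow, not the arithmetic: collect anonymized responses, score them consistently over time, and watch the trend rather than chasing a target number.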
The 6-12 Month Reality Check
This is where the conversation shifts from launch metrics to sustainability metrics.
Success isn’t go-live. Success is sustained human adoption at month six and month twelve.
I’ve seen organizations that look phenomenal at three months and are back to old behaviors at nine months. So you need to build sustaining mechanisms — and measure whether they’re actually working.
The four sustaining mechanisms:
1. Reinforcement systems. Are new behaviors reinforced in routine processes? If people slip back to the old way and nobody notices or corrects, the new way disappears. You can measure this: how often is the new process actually used in standard workflow? Are there checkpoints that catch when people revert?
2. Capability maintenance. Do people retain skills at three months, six months, twelve months? Initial training doesn’t stick without reinforcement. You can measure this: competency assessments over time, error rates, manager observations of skill application.
3. Environmental alignment. Do systems, tools, and processes actually support the new way of working? If the old system is easier to use, people will use it. You can measure this: system usage data, workaround frequency, time spent in different workflows.
4. Leadership continuation. Are leaders still visibly committed? Attention matters. When leadership attention moves on to the next initiative, employees conclude the change didn’t actually matter. You can measure this: leadership communication frequency, investment in maintaining capability, whether new hires receive the training.
The measurement cadence matters too. Weekly or bi-weekly tracking for the initiative team (are we on track?). Monthly or quarterly health checks on behavioral and cultural metrics. Periodic enterprise-level measurement of actual business outcomes (did we move the needle?).
A Practical Framework: Putting It Together
Here’s how to structure this so it’s not overwhelming.
Step 1: Define success first. Before you launch, work with sponsors, subject matter experts, and affected populations to define what success actually looks like. Not “80% adoption.” Something like: “Teams are consistently using the new process within two weeks of launch, error rates drop by 40% by month four, and people report understanding the business reason for the change.”
Step 2: Build a measurement dashboard that combines multiple signal types. Adoption metrics (easy to track, low insight). Behavioral indicators (harder to track, high insight). Cultural health signals (requires listening). Business outcomes (the only thing that ultimately matters).
Step 3: Track at multiple time horizons. Launch metrics (are we executing?). Thirty-day snapshot (early adoption patterns). Ninety-day deep dive (are people proficient?). Six-month and twelve-month reviews (has this stuck?).
The data backs this up. Organizations that measure compliance with change initiatives meet or exceed objectives 76% of the time, versus 24% for those that don’t. And programs with effective metric tracking are 7.3 times more likely to succeed overall (McKinsey).
That’s not a coincidence. Measurement forces clarity. Clarity drives execution.
Cultural Health Signals: The Metrics Hiding in Plain Sight
Beyond behavioral indicators and psychological safety, there’s a set of metrics your organization already collects that can tell you whether change is taking hold — if you know where to look.
Retention patterns. If you’re losing people at a higher rate in departments going through change, that’s signal. Not all attrition is bad — some people genuinely aren’t a fit for the new direction. But a spike in departures from your strongest performers? That’s the culture rejecting the change.
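That retention check is a simple comparison you can automate. Here is an illustrative sketch; the department names, the quarterly figures, and the 1.5x flag threshold are all assumptions for the example, not recommended values.

```python
# Illustrative sketch: flag change-affected departments whose attrition
# runs well above the baseline of unaffected departments.
# Threshold and sample figures are assumptions, not recommendations.

def attrition_rate(departures, headcount):
    """Departures as a fraction of headcount for the period."""
    return departures / headcount

def flag_change_risk(dept_stats, affected, threshold=1.5):
    """Return True for each affected department whose attrition exceeds
    threshold x the average rate of the unaffected departments."""
    baseline_depts = [d for d in dept_stats if d not in affected]
    baseline = sum(
        attrition_rate(*dept_stats[d]) for d in baseline_depts
    ) / len(baseline_depts)
    return {
        d: attrition_rate(*dept_stats[d]) > threshold * baseline
        for d in affected
    }

stats = {  # dept: (departures this quarter, headcount)
    "ops": (3, 120),
    "finance": (2, 80),
    "sales": (9, 100),  # going through the change
}
print(flag_change_risk(stats, affected={"sales"}))  # prints {'sales': True}
```

A flag like this doesn’t tell you why people are leaving — that’s what exit interviews are for — but it tells you where to look, and it turns data you already have into an early-warning signal.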
Exit interview themes. I’m always amazed how few organizations mine their exit interviews for change-related feedback. People are far more honest on the way out than they are in engagement surveys. If you’re hearing themes about unclear direction, poor communication, or feeling left behind — that’s data about your change effort, not just about individual departures.
Absenteeism and engagement trends. Rising absenteeism and declining engagement scores in change-affected teams are an early warning system. This isn’t about one bad quarter. It’s about trend lines. If engagement is dropping six months into a change initiative, something’s wrong with how the change is being experienced — even if adoption numbers look fine.
Leadership alignment signals. Is messaging from senior leaders consistent? Are leaders at every level modeling the desired behaviors? Are they dedicating time and resources to the change, or have they moved on to the next shiny initiative? Inconsistency across the leadership team is one of the fastest ways to undermine change, and you can track it.
These aren’t exotic metrics. Most organizations already have this data. They just don’t connect it to their change efforts. When you do, you get a much richer picture of whether change is actually embedding into the culture or just sitting on the surface.
What You’re Optimizing For
Here’s the shift I want you to make in your thinking.
You’re not trying to hit an adoption number. You’re not trying to check boxes on a training checklist. You’re trying to answer one question: Did people’s behavior actually change, and is the culture supporting it?
The organizations that get the most value from change aren’t measuring how many people showed up to training or how many people clicked the “agree” button. They’re measuring whether behavior changed in ways that matter. They’re checking whether the culture has shifted to support the new way as normal. They’re verifying that business outcomes actually improved.
I’ll leave you with this: the difference between organizations that measure effectively and those that don’t is a 4x ROI gap (143% vs. 35%). Programs with effective metric tracking are 7.3 times more likely to succeed. That’s not a rounding error. That’s the difference between a change that transforms your organization and one that evaporates by next quarter.
Stop counting logins. Start measuring what actually changed.
This article is part of gothamCulture’s Change Management & Culture series. For more on measuring organizational culture directly, see How to Measure Organizational Culture. To assess your organization’s readiness for change, see AI Culture Readiness Assessment.
Source: gothamCulture – gothamculture.com
