When Dashboards Lie: The Visualisation Problem in Strategic Decision-Making
Dashboards don’t present reality — they present a curated, aggregated, delayed version of reality. We traced how three critical business signals were invisible to executive dashboards despite being visible to the people closest to the work.
The dashboard assumption
There’s an implicit assumption in most organisations: if it’s on the dashboard, leadership can see it. If leadership can see it, it will be acted on. If it’s acted on, the problem will be resolved.
Every link in this chain is unreliable. Dashboards show what they were designed to show — which is always a function of what someone thought mattered at the time the dashboard was built. They aggregate, summarise, and normalise. They present data at intervals (daily, weekly, monthly) that may not match the cadence of the phenomena they’re tracking. And they create a powerful illusion: the feeling of situational awareness without the substance of it.
We traced three business-critical signals through three different organisations and checked whether the executive dashboards in each captured them. In all three cases, the signals were clearly visible to frontline teams and completely absent from the dashboards executives used to make decisions.
Case 1: The customer attrition signal
A B2B SaaS company tracked customer health through a dashboard showing NPS scores, support ticket volume, and usage metrics. All three indicators were green. The dashboard showed a healthy customer base.
Meanwhile, the customer success team was seeing a different pattern: key users at three enterprise accounts had stopped logging in. Not the accounts — the specific individuals who championed the product internally. Their absence meant the product was losing its advocates inside those organisations.
This signal — champion disengagement — wasn’t captured by any dashboard metric. NPS was still high (collected annually, so it reflected historical sentiment). Support tickets were low (the champions weren’t using the product enough to generate issues). Usage was within normal ranges at the account level (activity from other users masked the champions’ absence).
Within six months, all three accounts churned. The dashboard never saw it coming because the dashboard was measuring the wrong things at the wrong granularity.
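The granularity gap is easy to show in miniature. The sketch below is illustrative only: it assumes a hypothetical record of per-user logins with an is_champion flag, and every name and date is invented. The account-level rollup that most dashboards report still looks active, while a champion-level view of the same records surfaces exactly the signal the customer success team was seeing.

```python
# A minimal sketch of the granularity problem (hypothetical data and field names).
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class UserActivity:
    account: str
    user: str
    is_champion: bool   # the internal advocate the product depends on
    last_login: date

TODAY = date(2024, 6, 1)
STALE_AFTER = timedelta(days=30)

activity = [
    UserActivity("acme", "champion@acme.example", True,  date(2024, 3, 2)),
    UserActivity("acme", "analyst1@acme.example", False, date(2024, 5, 30)),
    UserActivity("acme", "analyst2@acme.example", False, date(2024, 5, 29)),
    UserActivity("acme", "ops@acme.example",      False, date(2024, 5, 28)),
]

# Account-level view: the granularity most dashboards report.
active = sum(1 for a in activity if TODAY - a.last_login <= STALE_AFTER)
print(f"acme: {active}/{len(activity)} users active in the last 30 days")  # looks healthy

# Champion-level view: the early-warning signal the dashboard never carried.
quiet_champions = [a.user for a in activity
                   if a.is_champion and TODAY - a.last_login > STALE_AFTER]
print("champions gone quiet:", quiet_champions)
```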
Case 2: The delivery velocity illusion
An engineering organisation measured delivery through a dashboard showing sprint velocity, cycle time, and deployment frequency. All metrics showed improvement. The dashboard told a story of increasing capability.
The reality: the team was completing more tickets by splitting work into smaller increments, reducing cycle time by cherry-picking simpler tasks, and deploying more frequently by separating deployments from feature completions. Every metric improved. Actual feature throughput — the rate at which complete, customer-valuable features reached production — had declined by 15%.
The dashboard metrics were proxies for delivery. The team optimised the proxies rather than the underlying capability. Nobody designed the dashboard to detect this — because when the dashboard was built, the assumption was that the metrics and the capability were the same thing.
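The gap between proxy and capability is mechanical, and a toy example makes it visible. The sketch below assumes a made-up list of completed tickets, each tagged with the feature it belongs to and whether that feature became customer-ready in the sprint; none of the names or numbers reflect the team’s actual tooling. Ticket velocity rises from one sprint to the next while the count of complete features shipped falls.

```python
# Hypothetical sketch: ticket velocity versus complete-feature throughput.
from collections import defaultdict

# Each completed ticket: (sprint, feature_id, feature_complete_this_sprint)
completed = [
    ("S1", "checkout-v2",    False), ("S1", "checkout-v2", False),
    ("S1", "search-filters", True),
    ("S2", "checkout-v2",    False), ("S2", "checkout-v2", False),
    ("S2", "checkout-v2",    False), ("S2", "audit-log",   False),
]

tickets_per_sprint = defaultdict(int)   # the proxy the dashboard shows
features_per_sprint = defaultdict(int)  # what customers actually receive
for sprint, feature, feature_complete in completed:
    tickets_per_sprint[sprint] += 1
    if feature_complete:
        features_per_sprint[sprint] += 1

for sprint in ("S1", "S2"):
    print(sprint, "tickets closed:", tickets_per_sprint[sprint],
          "features shipped:", features_per_sprint[sprint])
# S1 tickets closed: 3 features shipped: 1
# S2 tickets closed: 4 features shipped: 0   -> velocity up, throughput down
```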
Case 3: The strategic initiative gap
A financial services company tracked strategic initiative progress through a dashboard showing milestone completion, budget utilisation, and risk ratings. Seventeen initiatives showed green or amber. None showed red.
We mapped those same initiatives against their stated strategic objectives using a different methodology — assessing whether each initiative’s outputs were creating evidence that its strategic bet was paying off. The result: seven of the seventeen initiatives had no measurable connection to strategic outcomes. They were on time, on budget, and strategically disconnected.
The dashboard measured project health. Nobody had built a dashboard that measured strategic alignment. The two are not the same thing.
A dashboard that shows you what you designed it to show is functioning perfectly. The problem is that what you designed it to show may not be what you need to see.
The structural problem
These aren’t dashboard design failures. They’re symptoms of a deeper structural issue: the distance between the people who build dashboards and the decisions those dashboards are meant to support.
Dashboards are typically built by analytics or engineering teams based on requirements from management. The requirements reflect what management thinks it needs to see. What management thinks it needs to see is shaped by the metrics it currently uses. And the metrics it currently uses were designed for a previous strategic context.
The result is a measurement system that’s always one strategic epoch behind the organisation. The dashboard shows what mattered last year. The decisions that need to be made are about what matters now.
What changes this
The fix isn’t better dashboards. It’s a different relationship between measurement and decision-making:
Start with the decision, not the data. For each strategic decision the executive team makes regularly, ask: what signal would change this decision? Then check whether that signal appears on any dashboard. If it doesn’t, the dashboard is decorative for that decision. (A sketch of this check, together with the review dates described next, follows this list.)
Build in obsolescence. Every dashboard metric should have an explicit review date — a point at which someone asks whether this metric still matters, whether it’s measuring what it was intended to measure, and whether the decision it was meant to inform has changed.
Close the gap between signal and dashboard. The people closest to the work can usually tell you what’s going wrong before any dashboard can. The question is whether a mechanism exists to surface their observations to decision-makers in a form that competes with the dashboard’s authority. In most organisations, it doesn’t.
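The first two fixes amount to an audit that can be run on paper or in a few lines of code. The sketch below is a minimal illustration, not a tool: the decisions, signals, metrics, and review dates are all invented, and the hard work is agreeing on the mapping rather than writing the check. It flags decisions with no supporting signal on the dashboard, and metrics that are past their review date.

```python
# Hypothetical sketch: auditing a dashboard against the decisions it should inform.
from datetime import date

# For each recurring executive decision: which signal would change it?
decisions_to_signals = {
    "renew enterprise pricing strategy":  {"champion disengagement rate"},
    "rebalance the initiative portfolio": {"evidence of strategic-bet payoff"},
    "invest in delivery capability":      {"complete-feature throughput"},
}

# What the dashboard actually shows, each metric with an explicit review date.
dashboard_metrics = {
    "NPS":                   date(2025, 1, 1),
    "support ticket volume": date(2025, 1, 1),
    "sprint velocity":       date(2024, 5, 1),
}

today = date(2024, 6, 1)

# Check 1: a dashboard that carries no signal capable of changing a decision
# is decorative with respect to that decision.
for decision, signals in decisions_to_signals.items():
    missing = signals - set(dashboard_metrics)
    if missing:
        print(f"'{decision}' has no supporting signal on the dashboard: {sorted(missing)}")

# Check 2: built-in obsolescence. Every metric gets re-validated or retired.
for metric, review_by in dashboard_metrics.items():
    if review_by <= today:
        print(f"metric '{metric}' is past its review date; re-validate or retire it")
```

Neither check replaces the judgement of the people closest to the work; it only makes explicit where the dashboard and the decisions it is supposed to support have drifted apart.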