You can't "do" revenue. You can't "do" customer satisfaction. You can't "do" complaint reduction.

You can only do specific activities that create those results.

Those activities are your inputs. The results are your outputs.

Most dashboards I see track outputs obsessively. "Revenue this week." "NPS score." "Complaint rate."

Then people sit in rooms asking: "Why is revenue down?" or "How do we improve NPS?"

Wrong question.

You can't fix an output directly. You can only fix the inputs that drive it.

The Core Distinction.

Controllable Input Metrics are the metrics you can act on directly. You can assign someone to do something on Monday and measure whether they did it by Friday.

Output Metrics are the results of your inputs. The lagging indicators that tell you how the business is performing. You report them, you track them, but you don't directly control them.

The link is everything: Every input must connect to a specific output. You must be able to complete this sentence:

"We drive [input metric] to move [output metric]."

If you can't complete that sentence, either the input doesn't belong, or you haven't thought through the causal chain.

The Forcing Questions.

Before you add any metric to your dashboard, it needs to pass a forcing question. If it fails, it doesn't belong in that category.

For Customer Experience Metrics:"Would my customer know this number without us telling them?"

If a customer walked out of your store or finished interacting with your service, would they know this metric from their own experience?

  • Wait time—yes, the customer stood in line.

  • Mystery audit scores—no, the customer doesn't know you audit yourself.

For Output Metrics:"Can I control this directly on Monday, or is it a result of other things?"

If you can't make this number move by assigning a single task, it's an output.

  • Store revenue—result of footfall × conversion × basket size.

  • Training sessions conducted—you directly control this.

That second one is an input, not an output.
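
That decomposition is just arithmetic. A minimal sketch with invented numbers (none of them from this piece) shows why you manage the drivers rather than the revenue figure itself: a 10% move in any one driver is a 10% move in revenue.

```python
# Illustrative numbers only -- revenue is a product of its drivers,
# which is why you work on the drivers, not the revenue line.
footfall = 1_200        # visitors per week
conversion = 0.25       # share of visitors who buy
basket_size = 18.0      # average spend per purchase

revenue = footfall * conversion * basket_size
print(f"Baseline weekly revenue: {revenue:,.0f}")        # 5,400

# Driving a single input, e.g. +10% conversion, moves the output by +10%.
revenue_after = footfall * (conversion * 1.10) * basket_size
print(f"After +10% conversion:   {revenue_after:,.0f}")  # 5,940
```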

For Input Metrics:"Can I assign this to someone on Monday and measure whether they did it by Friday?"

If you can't assign it to a person with a deadline, it's not actionable enough.

  • "Improve customer experience" fails—what specific action?

  • "Complete 10 merchandising updates" passes—assignable and measurable.

The Two Dashboards.

I maintain two distinct sets of metrics. Mixing them defeats the purpose.

P0 Metrics (Executive Dashboard)

  • Maximum 10 metrics total (target 5-7)

  • Shows overall function health

  • When a P0 breaks, you drill into P1 to diagnose

P1 Metrics (Operating Dashboard)

  • No limit—include everything you need to operate

  • Used to manage teams on a daily/weekly basis

  • The diagnostic layer that sits beneath P0

Think of P0 as the instrument panel in an airplane cockpit—the critical gauges the pilot must watch. P1 is the full engineering diagnostics system—detailed data the maintenance crew uses.

The pilot doesn't need all the engineering data during flight, but when something goes wrong, that's where you look.

The Three Sections of P0.

Section 1: What Did Customers Experience?

Metrics customers directly experience. Wait time. Delivery time. Report turnaround. Google rating.

What fails: Mystery audit scores (customer doesn't see your internal audits), training pass rates (customer doesn't see your scorecards), internal quality metrics.

Section 2: Output Metrics (Business Results)

The outcomes that emerge from multiple activities. Store revenue. Complaint rate. Error rate.

These are important, but you don't discuss them at length—you report them briefly, then focus on the inputs that drive them.

What fails: Anything you directly control. Training sessions conducted, stores launched, audits completed—these are inputs disguised as outputs.

Section 3: Controllable Input Metrics

The specific activities you're driving to move your outputs. Training completion %. Audits conducted. Preventive maintenance done. Merchandising updates completed.

What fails: Strategy theater. "Improve customer experience" isn't a metric. "Complete 15 store audits" is.

The Critical Rule.

Every Input Must Link to a Specific Output.

For every metric in Section 3, you must be able to complete this sentence:

"We drive [INPUT METRIC] → to move [OUTPUT METRIC in Section 2]"

No orphan inputs. If an input doesn't connect to an output you care about, either find the connection or delete the metric.
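
One way to keep yourself honest here is to store the links as data and have the dashboard complain when an input has nothing attached. A minimal sketch, using hypothetical metric names of my own rather than anything from an actual dashboard:

```python
# Hypothetical metric names for illustration. Each Section 3 input
# must name the Section 2 output it is meant to move.
input_to_output = {
    "training_completion_pct": "error_rate",
    "store_audits_conducted": "complaint_rate",
    "preventive_maintenance_done": "store_revenue",
    "merchandising_updates_completed": None,  # orphan: no linked output
}

orphans = [inp for inp, out in input_to_output.items() if not out]
if orphans:
    # Either find the connection or delete the metric.
    raise ValueError(f"Orphan inputs with no linked output: {orphans}")
```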

How to Discover Your Input Metrics.

Method 1: Drill Down.

Start with an output you care about. Ask: "What factors drive this metric?"

Store Revenue (OUTPUT)
└─ What drives revenue? → Footfall × Conversion × Basket Size
   └─ What drives footfall? → Marketing, Google rating, word of mouth
      └─ What drives Google rating? → Customer experience, cleanliness, wait time
         └─ Store cleanliness audits conducted (INPUT—assignable!)

Keep drilling until you hit activities that are directly assignable. Those are your inputs.
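
The drill-down is really a tree whose leaves are assignable activities. Here's one possible way to hold it, using the example chain above plus a few placeholder branches I've invented to round out the structure:

```python
# Driver tree held as nested dicts. A string leaf is treated as a
# directly assignable input; everything else is an intermediate driver.
driver_tree = {
    "store_revenue": {
        "footfall": {
            "google_rating": {
                "cleanliness": "store_cleanliness_audits_conducted",  # INPUT
                "wait_time": "staffing_roster_adherence",             # INPUT (placeholder)
            },
        },
        "conversion": "merchandising_updates_completed",              # INPUT (placeholder)
        "basket_size": "cross_sell_prompts_completed",                # INPUT (placeholder)
    },
}

def assignable_inputs(node):
    """Walk the tree and collect the leaf activities you can assign on Monday."""
    if isinstance(node, str):
        return [node]
    return [leaf for child in node.values() for leaf in assignable_inputs(child)]

print(assignable_inputs(driver_tree))
```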

Method 2: Exception Investigation.

When something unexpectedly spikes (good or bad), investigate what caused it. If it's repeatable, measure the activities that drove it.

When something breaks, ask: "What metric should we have been tracking that would have predicted or prevented this?"

Method 3: Frontline Data.

Talk to your frontline staff. Mine your customer support data. The problems that surface most frequently often point directly to input metrics you should be tracking.

The Evolution.

Your metrics aren't static. They evolve as you learn what actually drives your business.

  1. Add an output metric you care about

  2. Seek controllable inputs that you believe drive that output

  3. Drive the inputs and watch whether the output moves

  4. Keep inputs that work—drop inputs that don't move the output despite being driven

  5. Repeat

Over time, you build an increasingly accurate causal model of your business. You stop guessing and start knowing.
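
If you log weekly values for an input and the output it's linked to, step 4 can be made semi-mechanical. A deliberately naive sketch with invented data (a simple before/after average, not a rigorous test):

```python
# Invented weekly data: did driving the input actually move the output?
# Weeks 1-4 are before we pushed the input hard, weeks 5-8 are after.
audits_conducted = [2, 3, 2, 3, 8, 9, 8, 10]                  # the input we drove
complaint_rate   = [5.1, 5.0, 5.2, 4.9, 4.1, 3.8, 3.9, 3.6]   # the linked output

before = sum(complaint_rate[:4]) / 4
after  = sum(complaint_rate[4:]) / 4

# Keep the input if the output moved in the right direction; otherwise
# drop it and look for a better driver. Lower complaint rate is better.
decision = "keep" if after < before else "drop"
print(f"Complaint rate {before:.2f} -> {after:.2f}: {decision} the input")
```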

Common Mistakes.

  • Tracking an output with no linked inputs

  • Putting an input in the outputs section (use the forcing question)

  • Vague metrics like "improve quality" (must be specific, assignable, measurable)

  • Too many P0 metrics (ruthlessly prioritize—less is more at the executive level)

  • Discussing outputs at length in reviews (report briefly, then focus on inputs)

  • Guessing at root causes (say "I don't know" if you haven't investigated)

The Shift.

Focus 80% of your discussion and effort on inputs.

If you move the right inputs in the right direction, the outputs will follow.

The goal isn't to stare at outputs and wonder why they're not moving. The goal is to identify the levers you actually control, pull them systematically, and build a causal understanding of what creates results.

Stop asking "why is revenue down?"

Start asking "which inputs are we not driving?"

Not the only way. Probably not even the best way. Just one practitioner's version that worked.

What's your experience? How do you separate what you control from what you measure?

~Discovering Turiya@work@life
