I use the word "mechanism" a lot. In newsletters about action items, hiring pipelines, ticketing systems¹, first-90-day transitions¹. Readers have started asking: what exactly do you mean by that?

Fair question. I've been using it as if the definition were obvious. It isn't.

The word comes from my time at Amazon, where "build a mechanism" is the response to almost every recurring problem. Not "try harder." Not "send a reminder." Build a mechanism.

It took me a while to understand what that actually meant. Once I did, I started spotting incomplete mechanisms everywhere. Including my own.

¹ upcoming newsletters

The Sentence That Changed How I Think About Systems

At Amazon, every time something went wrong, the response was the same. Not from one leader. From every leader, in every room. "Build a mechanism to ensure we're never here again."

Not "be more careful." Not "pay more attention." Build a mechanism.

It wasn't a management technique. It was muscle memory. The culture had internalized it so deeply that asking for good intentions instead of a mechanism would have felt like asking someone to clap with one hand. You'd get a strange look.

The implication: if you're relying on people remembering, caring, or trying harder, you don't have a solution. You have a hope. And hope is not a strategy.

Jeff Bezos has a line that's become an adage across Amazon:

"Good intentions don't work. Mechanisms do."

When you ask for good intentions, you're not asking for change. People already had good intentions. The good intentions were there when the problem happened.

What a Mechanism Is Not

A mechanism is not a tool. This is where most people stop.

A spreadsheet tracker is a tool. A weekly review meeting is a tool. A checklist is a tool. A Slack channel is a tool. A dashboard is a tool.

Tools are necessary. But a tool sitting in a Google Drive folder, used by some people sometimes, inspected by nobody — that's not a mechanism. That's furniture.

I've built dozens of tools in my career that died within weeks. Beautiful templates. Elegant dashboards. They all had the same problem: I designed the artifact and assumed the behavior would follow. It didn't.

The action item tracker I wrote about in a previous newsletter? The spreadsheet was the easy part. What made it work was everything around the spreadsheet: the rule that you enter it during the meeting (not after), the rule that owners update before the review (not during), the fact that the meeting starts with the tracker (so skipping it means everyone notices), and the visible strikethrough history that makes date-slipping impossible to hide.

The spreadsheet was the tool. The rest was the mechanism.

The Four Components

A mechanism has four parts. If any one is missing, the whole mechanism degrades.

1. The Tool

This is the thing itself. The tracker, the template, the meeting format, the classification system. It transforms inputs into outputs. It needs to be specific enough that someone can use it without asking questions, and simple enough that compliance isn't a burden.

The test: Can someone new use this tool correctly on their first attempt, with minimal instruction?

If the tool requires perfection from humans, it will fail. Design for the realistic human — the one who's busy, distracted, and juggling six other priorities.

2. Adoption

Who uses this tool? Not "the team." Which specific people, in which specific situations, at which specific moments?

Adoption is where most mechanisms die. You build a great tool, send an email announcing it, and assume people will start using it. They won't. Not because they disagree with it, but because their existing habits are strong and your new tool isn't yet part of their muscle memory.

I learned this the hard way. I'd set up a review cadence — weekly check-ins on a tracker. I'd run it diligently for two weeks, maybe three, while the problem was fresh. Then the pressure would ease. I'd let one review slip. Then another. Within a month the tracker was collecting dust and I was back to firefighting.

The problem wasn't the tool. The problem was that my adoption depended on my personal motivation, and motivation is a terrible forcing function. It fluctuates. It fades. It gets crowded out by the next crisis.

What finally worked: I stopped treating recurring coordination as optional. I put monthly syncs on the calendar with a standing invite. Non-negotiable. I told the team: even if we have nothing to discuss, we meet and say we have nothing to discuss. That felt absurd at first. But the absurdity was the point. The meeting existing was the forcing function, not the agenda.

Adoption requires this kind of structural commitment. A meeting that always happens. A format where the tracker is always the first agenda item. A system gate that won't submit unless the required fields are filled. The best forcing functions become muscle memory. They stop feeling like compliance and start feeling like "how we do things here."

Force adoption through structure, not through authority. Authority fades. Structure persists.

And here's the part nobody warns you about: getting there is brutal. Everybody groans. Everybody resists. They don't want to change the path. You spend enormous energy making the thing work, and for weeks it feels like you're pushing a boulder uphill for no reason. Then one day things smooth out. The mechanism starts saving more energy than it costs. But building that muscle memory is the hard part, and most people quit before they reach the crossover.

3. Inspection

This is the component most people forget entirely. And it's the one that matters most.

An un-inspected mechanism is a broken mechanism. Always. Without exception.

Inspection starts with a question most people skip: How will I know when something is going wrong? Not "who checks if it's working." How will I know when the mechanism itself is being gamed or bypassed?

You have to think like someone who wants to get around the system. Because they will. Not out of malice, but out of convenience, time pressure, or habit. Your job is to anticipate the failure modes and design inspection around them.

In a ticketing system, people will submit tickets without photos because taking a photo is extra effort. So you make photos compulsory. The system won't accept a ticket without one. In a training program, people will mark modules complete without absorbing content. So you require evidence of application, not just completion. In a CAPA process, people will write shallow root cause analyses. So you have a bar raiser randomly audit them — someone outside the immediate team who checks whether the analysis actually reached a root cause or stopped at a convenient symptom.
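The "system won't accept it" idea is the key move: the rule lives in the submission path itself, not in a policy document. A minimal sketch of such a gate, with hypothetical field names (`title`, `location`, `photo_url` are assumptions, not a real schema):

```python
# Hypothetical sketch of a "system gate": a ticket is rejected at
# submission time if required evidence is missing, so compliance
# doesn't depend on anyone remembering to check later.

REQUIRED_FIELDS = ("title", "location", "photo_url")  # assumed schema

def submit_ticket(ticket: dict) -> dict:
    """Accept a ticket only if every required field is present and non-empty."""
    missing = [f for f in REQUIRED_FIELDS if not ticket.get(f)]
    if missing:
        raise ValueError(f"Ticket rejected, missing: {', '.join(missing)}")
    return {**ticket, "status": "open"}
```

The design choice that matters is where the check runs. An after-the-fact audit asks someone to care; a gate in the submission path makes the convenient thing and the correct thing the same thing.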

But catching people who skip steps is the easy part of inspection. The harder part is catching when the mechanism itself is producing the wrong outcome.

When I was head of operations at Practo, I managed customer service teams directly. Shift managers and floor leads reported to me. We built a mechanism around first response time — how fast agents picked up and resolved customer issues. Good tool. Strong adoption. The team optimized hard.

Too hard.

Agents started listening to half a sentence, assuming the problem, and vomiting out a scripted response. They'd close the ticket before the customer finished talking. Speed was up. Empathy was gone.

The moment I knew we'd failed: a customer called to say thank you. He'd had a wonderful service experience and wanted to tell us. Before he could finish his sentence, the agent apologized for the inconvenience and offered a discount.

A thank-you call. Met with an apology and a discount. Because the mechanism had trained the team to assume every call was a complaint and race to close it.

The mechanism was working perfectly by its own metrics. First response time was excellent. But the original objective — answer fast, treat with empathy, be efficient while maintaining care — was dead. We'd optimized the metric and killed the mission.

That's an inspection failure. Not because people were gaming it maliciously. Because the inspection was measuring compliance with the tool instead of alignment with the objective.
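One hedged sketch of what "inspecting the objective" can look like in practice: pair the tool's own metric with a random audit sample that a human reviews for quality. Everything here is illustrative (the field names and 5% sample rate are assumptions), but the shape is the point: the speed number and the empathy check travel together.

```python
import random

# Hypothetical sketch: inspect the objective, not just the tool's metric.
# Alongside average first-response time, randomly sample closed tickets
# for a human empathy/accuracy audit, so "fast but wrong" shows up.

def weekly_inspection(tickets: list[dict], sample_rate: float = 0.05,
                      seed: int = 0) -> dict:
    rng = random.Random(seed)  # seeded so the audit sample is reproducible
    avg_response = sum(t["response_minutes"] for t in tickets) / len(tickets)
    audit_queue = [t["id"] for t in tickets if rng.random() < sample_rate]
    return {
        "avg_response_minutes": avg_response,  # the tool's own metric
        "audit_queue": audit_queue,            # goes to a human reviewer
    }
```

If the report only ever contained the first number, you would get exactly the failure described above: a mechanism that is "working perfectly by its own metrics" while the mission dies.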

4. Iteration

Mechanisms aren't permanent. The context changes. The team changes. The problems change. A mechanism that worked six months ago might be creating overhead today without delivering value.

Someone needs to own the question: Is this mechanism still serving the problem it was built to solve? Does it need to evolve? Does it need to die?

Without an iteration owner, mechanisms become bureaucracy. They survive long past their usefulness because nobody has the authority or the incentive to question them. You end up with teams filling trackers that nobody reads, attending reviews that don't drive decisions, following processes that solve last year's problems.

Why This Matters for Everything I Write About

Every operational framework I share in this newsletter is a mechanism, not a tool.

The action item system isn't the spreadsheet. It's the spreadsheet + the rule that you record it live + the rule that you update before the meeting + the meeting that starts with it + the visible history that prevents hiding. Remove any one of those, and you have a spreadsheet that dies in two weeks.

The ticketing system (upcoming article) isn't the ticket database. It's the database + the classification matrix + the closure standards + the RCA requirement + the metric that tells you whether you're actually improving. Without the forcing functions and inspection loops, it's just a complaint box.

Every time I write "build a mechanism," I mean: build something that works even when people are busy, distracted, or don't care as much as you do. Something that doesn't rely on your personal follow-up to function. Something that survives your absence.

The Lever Question

One more thing that took me years to learn: mechanisms fail when you pull the wrong levers.

If people don't believe something matters, no amount of process will fix it. That's a beliefs problem. You need to change how people think, not what they do.

If people know what to do but don't do it, training won't help. That's an incentives problem. Their rewards don't align with the behavior you want.

If nobody owns the problem, adding metrics won't help. That's a structure problem. Someone needs clear authority and accountability.

The hierarchy runs from big levers (mental models, goals, org structure) to small levers (processes, metrics, resources). The mistake I see most often: reaching for small levers to solve big problems. More training. More dashboards. More people. When the actual issue is that nobody believes this matters, or that the incentives point the wrong way, or that no one owns it.

Resources — more people, more tools, more budget — should be the last lever you consider. Not the first.

Mechanism Health Check

Use this when building a new mechanism or diagnosing why an existing one is underperforming. Walk through each component honestly.

The Problem

  1. What's the recurring problem this mechanism solves? (Must be recurring, not one-off.)

  2. Who's impacted? What's the cost of not solving it?

  3. What does "solved" look like? How would you measure it?

If you can't answer these crisply, stop. No mechanism can fix an unclear problem.

The Tool

  1. What does the tool do, specifically?

  2. Can someone new use it correctly on their first attempt?

  3. Does it require perfect human compliance to work? (If yes, it won't hold.)

  4. Are there parts of the tool nobody uses? (Simplification opportunity.)

Adoption

  1. Who specifically must use this? (Names or roles — not "the team.")

  2. What forcing function ensures they use it? (A meeting, a format, a system gate.)

  3. What happens when someone doesn't use it? Is that visible?

  4. Who will resist? Why? What's your plan for them?

Inspection

  1. How will you know when something is wrong?

  2. How will you know when the mechanism is being gamed or bypassed?

  3. Is the mechanism producing the results you built it for? (If followed faithfully but the problem persists, the mechanism is wrong.)

Iteration

  1. Who owns improving this mechanism? (Unowned mechanisms rot.)

  2. When was the last time it was changed? (If never, it's probably stale.)

  3. Has it gotten simpler or more complex over time? (Good mechanisms simplify.)

  4. Is it still solving the problem it was built for, or has it become overhead?

The Completeness Test

Two things must be true:

  1. The tool transforms inputs into the outputs you need.

  2. The complete process — tool, adoption, inspection — maintains and improves itself over time.

If either is false, you don't have a mechanism yet. You have a draft.
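The completeness test can be sketched as a blunt self-check. This is illustrative only: the four component names come from the framework above, and the yes/no answers stand in for an honest walkthrough of the health check.

```python
# Hypothetical self-check: a mechanism is a "draft" until every
# component (tool, adoption, inspection, iteration) is in place.

COMPONENTS = ("tool", "adoption", "inspection", "iteration")

def completeness_test(answers: dict) -> str:
    """answers maps each component name to True (in place) or False."""
    missing = [c for c in COMPONENTS if not answers.get(c)]
    if not missing:
        return "mechanism"
    return f"draft (missing: {', '.join(missing)})"
```

The useful output isn't the label; it's the named gap. "Draft, missing inspection" tells you exactly which section of the health check to go back to.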

Where This Framework Falls Short

I'd be lying if I said building the four components guarantees success. It doesn't.

The Practo first-response-time mechanism had all four components. Working perfectly. And I was solving the wrong problem. I thought I was being smart — while 90% of teams don't even build proper mechanisms, I had all four components humming. Level two thinking. Operational sophistication.

But the real answer was level three: eliminate the need for customer support in the first place. Don't make the call faster. Make the call unnecessary. The best service is no need for service.

I thought I'd cracked it. I hadn't even asked the right question.

This framework gets you past where most teams are stuck. It gets you to 80, maybe 90 percent of the way. The remaining distance depends on whether you're solving the right problem to begin with. Whether your root cause analysis went deep enough. Whether you had the discipline to ask "why does this need to exist at all?" before building the machine to make it efficient.

That last part is closer to art than framework. I'm still journeying toward it.

Not the only way. Probably not even the best way. Just one practitioner's version that worked.

~Discovering Turiya@work@life
