
The Problem With How We Look At Performance.
Every Monday, I see dashboards showing "average response time: 2.3 hours" or "average delivery time: 3.2 days." Teams nod, check green boxes, move on.
But there's a fundamental issue: averages hide the story of what's actually happening to your customers and operations.
Most of us weren't taught to think in percentiles. We learned averages in school, use averages in spreadsheets, and build dashboards around averages. But when you're running operations, averages can be dangerously misleading.
Understanding Percentiles (The 5-Minute Explanation).
What percentiles tell you: What experience did X% of your customers actually have?
50th percentile (median): half your customers saw this value or better, half saw worse
90th percentile: 90% of customers saw this value or better, 10% saw worse
95th percentile: 95% of customers saw this value or better, 5% saw worse
Simple example: If your 90th percentile response time is 8 hours, it means 1 out of every 10 customers waits over 8 hours.
Why this matters: That 10% often includes your most complex cases, biggest customers, or most critical situations.
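The definitions above are easy to sanity-check in code. Here's a minimal sketch with hypothetical response times (the numbers are made up for illustration, not real ticket data), showing how the mean and the upper percentiles can tell very different stories about the same dataset:

```python
import numpy as np

# Hypothetical response times in minutes: most tickets resolve quickly,
# but a long tail of complex cases drags on for hours.
response_minutes = np.array([30, 35, 40, 45, 45, 50, 55, 60, 480, 1080])

p50, p90, p95 = np.percentile(response_minutes, [50, 90, 95])
mean = response_minutes.mean()

print(f"mean: {mean:.0f} min")  # the number most dashboards show
print(f"p50:  {p50:.0f} min")   # half of customers saw this or better
print(f"p90:  {p90:.0f} min")   # 1 in 10 customers saw worse than this
print(f"p95:  {p95:.0f} min")
```

Two slow tickets are enough to quadruple the mean relative to the median, and the p90/p95 values surface exactly the customers the average glosses over.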
The Customer Service Disaster You Don't Know You Have.
I was reviewing a support metrics deck for a friend at another company. The dashboard looked great:
Average response time: 90 minutes ✅
Target: <2 hours ✅
Status: Green ✅
But when I dug into percentiles:
50th percentile (median): 45 minutes (excellent)
90th percentile: 8 hours (terrible)
95th percentile: 18 hours (catastrophic)
Translation: While 50% of customers got lightning-fast service, 10% waited over 8 hours. These weren't random customers—they were typically the most complex cases, often from enterprise clients paying 10x more than regular customers.
The damage: Three high-value clients had already started looking at competitors. The "green" dashboard was hiding ₹2.5 crores in annual revenue risk.
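To see how a green average can coexist with a terrible tail, here's a back-of-envelope sketch. The numbers are hypothetical (chosen to roughly mirror the story above, not pulled from the actual dashboard): 88 tickets answered fast, 12 enterprise tickets stuck for hours.

```python
import numpy as np

# Hypothetical ticket mix: 88 routine tickets at 45 min,
# 12 complex enterprise tickets at 420 min (7 hours).
times = np.array([45] * 88 + [420] * 12)

print(f"average: {times.mean():.0f} min")               # 90 min -> dashboard shows green
print(f"median:  {np.percentile(times, 50):.0f} min")   # the fast majority
print(f"p90:     {np.percentile(times, 90):.0f} min")   # the hidden disaster
```

The average lands exactly on 90 minutes, comfortably under the 2-hour target, while the 90th percentile sits at 7 hours. Same dataset, opposite conclusions, depending on which number you put on the dashboard.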

