
The Death of the Estimate: Why Throughput Beats Story Points

Stop estimating. Start counting. Your forecasts will thank you.

Chris Bexon · 7 min read
The best estimate is no estimate. The best forecast uses your actual data.

I want you to add up all the hours your team has spent in estimation sessions over the past year. Planning poker, refinement discussions, t-shirt sizing, relative estimation — all of it. Now I want you to ask: what did you get for that investment?

If you're honest, the answer is usually: "A number that we didn't use for anything meaningful, attached to a story that changed scope three times before it was done." Story point estimation is the most widely practised and least useful ritual in modern software delivery. And there's a better alternative that takes almost no time and produces better forecasts.

Why Story Points Waste Time

Story points were invented to abstract away the variability in human time estimates. Instead of asking "how many hours?", you ask "how complex relative to other work?" In theory, this removes anchoring bias and individual variation. In practice, it doesn't work.

  • Points get converted to time anyway. Management divides total scope (points) by velocity (points per sprint) to get a date. The abstraction collapses back into exactly the time estimate you were trying to avoid.
  • Estimation accuracy doesn't improve with practice. Studies show that estimation accuracy for knowledge work is roughly the same whether you spend 2 minutes or 2 hours on it. The additional time in estimation sessions doesn't improve the estimate — it just creates the illusion of rigour.
  • Points hide variability. A "5-point story" might take 1 day or 3 weeks. The single number hides the enormous range of possible outcomes, giving false confidence in plans built on those numbers.
  • Estimation debates waste refinement time. I've watched teams argue for 20 minutes about whether something is a 3 or a 5. The debate produces no useful information. The team should have spent that time understanding the work.

Throughput-Based Forecasting

The alternative is remarkably simple. Instead of estimating each item, count how many items your team completes per week. That's your throughput. Then use that data to forecast.

"How many can we do in 6 weeks?" — look at your throughput data. You've completed 5, 7, 4, 6, 8, 5 items per week over the last 6 weeks. On average, about 6/week. Over 6 weeks, probably 30-40 items depending on variability. Use Monte Carlo for a probability distribution.

"When will these 25 items be done?" — same data. At 6/week average, about 4 weeks. Monte Carlo says: 50% chance in 4 weeks, 85% chance in 5 weeks, 95% chance in 6 weeks.

Notice what we didn't do: estimate any individual item. The forecast is based on actual throughput data, not opinions about complexity. It's faster (no estimation sessions needed), more accurate (based on real data, not guesses), and more honest (expressed as probabilities, not false certainty).

Right-Sizing vs Estimating

For throughput-based forecasting to work well, you need items that flow through your system in a relatively consistent time. This is where right-sizing comes in — and it's fundamentally different from estimating.

Estimating asks: "How big is this?" Right-sizing asks: "Is this small enough to flow through our system quickly?" The first question leads to debates about complexity. The second leads to practical decisions about splitting. And splitting is the only action that actually reduces risk — no amount of estimation accuracy changes the size of the work.

In practice, right-sizing is a quick gut check during refinement: "Can this be done in a few days? If not, how do we split it?" That conversation takes 2 minutes per item instead of 15, and it produces something useful — smaller items that flow faster and forecast more reliably.

A Better Question

"How many can we do?" is a better question than "how big is this?"

This subtle shift changes everything. "How big is this?" focuses on individual items and invites subjective debate. "How many can we do?" focuses on system capability and invites data-driven answers.

Stakeholders don't actually care about story points. They care about: "Will feature X be ready for the trade show?" or "How much of the roadmap can we deliver this quarter?" Throughput-based forecasting answers these questions directly, without the intermediate step of estimation that adds time and subtracts accuracy.

Practical Transition Guide

You don't need to abandon estimation overnight. Here's a gradual approach:

Week 1-4: Start Counting

Keep your current estimation process, but also start tracking throughput: how many items does the team complete each week? You'll have 4 weeks of throughput data at the end of this phase.
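Counting is trivially automatable from whatever your board or tracker exports. One way to turn a list of completion dates into a weekly throughput series, using a few hypothetical dates for illustration:

```python
from collections import Counter
from datetime import date

# Hypothetical completion dates pulled from your board or tracker.
completed = [
    date(2024, 3, 4), date(2024, 3, 5), date(2024, 3, 7),
    date(2024, 3, 12), date(2024, 3, 14),
]

# Group by ISO (year, week) and count: that's your throughput series.
weekly = Counter(d.isocalendar()[:2] for d in completed)
throughput = [count for _, count in sorted(weekly.items())]
print(throughput)  # → [3, 2]
```

Four weeks of this series is enough raw material for the first Monte Carlo runs.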

Week 5-8: Run Both

For the next month, generate forecasts both ways — using velocity and story points, and using throughput. Compare the accuracy. In my experience, throughput-based forecasts are at least as accurate, despite requiring zero estimation time.

Week 9-12: Shift Refinement

Replace estimation in refinement with right-sizing. Instead of "is this a 3 or a 5?", ask "can this be done in a few days? If not, how do we split it?" Use the freed-up time for deeper understanding of the work — edge cases, dependencies, technical approach — which is a better use of refinement anyway.

Week 13+: Full Throughput Forecasting

You now have 12+ weeks of throughput data. Use Monte Carlo simulation for all forecasting. Drop story points entirely. Celebrate the hours you just gave back to your team.

The Objections

"But we need estimates for budgets." No, you need forecasts for budgets. Throughput-based forecasting gives you better forecasts than estimation-based planning. Monte Carlo even gives you confidence levels so finance can choose their risk tolerance.

"But items aren't the same size." They don't need to be identical. Right-sizing makes them similar enough that counting works. And the variability that remains is captured in the Monte Carlo simulation — that's the whole point of probabilistic forecasting.

"But estimation helps us understand the work." Great — discuss the work. Understand the work. Just don't attach a number to it. The valuable part of estimation sessions was never the estimate; it was the conversation. Keep the conversation, drop the number.

In our Mastering Kanban interactive lab, you experience throughput-based forecasting firsthand — running Monte Carlo simulations on data you generate through Kanban board simulation. It's the fastest way to build confidence that this approach actually works.

Related Course

Mastering Kanban

Put these ideas into practice with hands-on, simulation-driven training.

Explore the course

Chris Bexon

Founder of Genius Teams. 30 years in delivery, coaching, and transformation. PST, ICAgile, and builder of interactive training that actually works.