Nova discusses our new process for capturing estimate variance data per story, and for incorporating insights from that data into Sprint Retrospectives and, potentially, other ceremonies in the future.
Trying Out Estimate Variance Reflection
As part of our evolving process, we're experimenting with a simple practice: Estimate Variance Reflection.
What We're Exploring
Estimation is hard, and that's okay. We're not aiming for perfect estimates—we're interested in learning from them. The core idea is to reflect, briefly but consistently, on how well our initial estimates aligned with the reality of delivery.
We're not sure yet what kind of insights this will yield, but we think it's worth trying. If nothing else, it creates space to pause and ask, "Was that what we expected?" And sometimes, the answer is the most valuable part.
How We’ll Use It
Reflections are captured directly on Jira tickets. They're generated by Nova (our AI code assistant), which is generally involved in completing each ticket. Once in place, these reflections become a pool of data we can analyze over time.
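To make this concrete, here is a minimal sketch of what a single reflection record might look like. The field names here are purely illustrative assumptions; the canonical format is the one documented in the ADR repo.

```python
# Hypothetical shape of an estimate-variance reflection captured on a ticket.
# Field names and types are illustrative, not the canonical ADR format.
from dataclasses import dataclass

@dataclass
class EstimateReflection:
    ticket: str            # Jira key, e.g. "IDDEV-2"
    estimated_points: int  # the original story-point estimate
    actual_points: int     # re-assessed effort after delivery
    author: str            # who completed the ticket
    sprint: str            # sprint identifier
    notes: str             # free text: what drove the variance

    @property
    def variance(self) -> int:
        # Positive means the work cost more than estimated.
        return self.actual_points - self.estimated_points

r = EstimateReflection("IDDEV-2", 3, 5, "alice", "S1", "hidden migration work")
print(r.variance)  # 2
```

Keeping the record small and structured like this is what makes later aggregation straightforward.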
We're particularly interested in using AI to extract insights:
- By time frame — aggregating reflections across a sprint or a quarter, to uncover recurring patterns or shifts in estimation accuracy.
- By user — identifying individual trends, which can inform coaching, calibration, and especially self-awareness.
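Both groupings above reduce to the same operation: average the variance (actual minus estimated) over some grouping key. A rough sketch, assuming reflections have been exported as dicts with hypothetical `estimated`/`actual` point fields:

```python
# Sketch: average estimate variance grouped by an arbitrary key.
# The record fields ("user", "sprint", "estimated", "actual") are
# assumptions for illustration, not our canonical export format.
from collections import defaultdict
from statistics import mean

reflections = [
    {"user": "alice", "sprint": "S1", "estimated": 3, "actual": 5},
    {"user": "alice", "sprint": "S2", "estimated": 2, "actual": 2},
    {"user": "bob",   "sprint": "S1", "estimated": 5, "actual": 3},
]

def mean_variance(records, key):
    """Average (actual - estimated) for each distinct value of `key`."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["actual"] - r["estimated"])
    return {k: mean(v) for k, v in groups.items()}

print(mean_variance(reflections, "sprint"))  # per-sprint trend
print(mean_variance(reflections, "user"))    # per-person trend
```

The same function serves both the time-frame and per-user views, which is why capturing a consistent record per ticket matters more than any particular analysis.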
This may evolve into something automated or visualized down the road. For now, we're starting with something simple that will be incorporated into existing ceremonies as an informal summary.
Maybe this practice sticks. Maybe it doesn’t. But for now, we're giving it space to prove itself.
📌 First example: IDDEV-2
📘 Captured in our ADR repo: Estimate Reflection Format