AI, Estimation, and the Quiet Rewiring of Software Delivery

AI has not broken agile planning overnight. But the pressure it introduces is making many familiar practices feel unexpectedly brittle.

The shift underway

Over the last decade, software teams have steadily refined how they plan, estimate, and deliver work. Agile rituals became familiar. Velocity charts stabilised. Grooming and planning settled into predictable rhythms.

AI has not broken this overnight. What it has done is introduce enough pressure into the system that many of those practices are starting to feel brittle.

This article is not a declaration that "estimation is dead" or that "AI replaces planning." Those claims are easy to make and hard to prove. Instead, it is a reflection on what is visibly changing already, where teams are quietly adapting, and where we need more evidence before drawing conclusions.

What is already changing

The relationship between effort and time is weakening

AI-assisted development tools - code completion, refactoring agents, test generation, documentation summarisation - are changing how work flows through teams. Not evenly and not predictably.

Two developers working on similar tasks can now experience radically different outcomes depending on their familiarity with AI tools, how well the problem is framed, and whether the work aligns with patterns AI handles well.

The result is not "everything is faster," but that variance has increased. Estimation models that rely on stable throughput struggle under this variability.

This does not invalidate estimation outright, but it does weaken time-based precision as the primary goal.
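To make the variance point concrete, here is a minimal sketch with invented throughput numbers: two hypothetical teams with the same average velocity (20 points per sprint) produce very different forecast ranges once variance grows. It resamples each team's historical per-sprint throughput to estimate how many sprints a fixed backlog would take.

```python
import random

def sprints_to_finish(history, backlog, trials=10_000, seed=42):
    """Monte Carlo forecast: repeatedly resample historical per-sprint
    throughput until the backlog is exhausted, then report the 5th and
    95th percentile of sprints needed."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        remaining, sprints = backlog, 0
        while remaining > 0:
            remaining -= rng.choice(history)
            sprints += 1
        outcomes.append(sprints)
    outcomes.sort()
    return outcomes[int(trials * 0.05)], outcomes[int(trials * 0.95)]

# Two invented teams, identical mean velocity (20 points/sprint).
steady = [19, 20, 21, 20, 20, 20]   # low variance
bursty = [8, 35, 12, 40, 5, 20]     # same mean, high variance

print(sprints_to_finish(steady, backlog=100))  # → (5, 6): a narrow range
print(sprints_to_finish(bursty, backlog=100))  # a much wider range
```

The averages are identical; only the spread differs. Any forecast built on the mean alone hides exactly the information the second team needs.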

Grooming is shifting from preparation to sense-making

AI is already capable of drafting tickets, refining acceptance criteria, and proposing implementation approaches. In some teams, this work is quietly happening before humans ever meet.

What AI does not do well is explain why this work matters now, what organisational constraints apply, or which historical risks are relevant but undocumented.

As a result, grooming sessions are increasingly valuable not for shaping the ticket, but for aligning on meaning, assumptions, and risk. Teams that treat grooming as a mechanical pre-planning step are finding diminishing returns.

Velocity is losing trust - softly, not loudly

Very few teams are openly abandoning velocity. Many are, however, mentally discounting it.

AI introduces bursts of productivity that distort historical averages. Velocity still looks precise, but teams increasingly know it is fragile. This creates a subtle but important gap: numbers are reported, while confidence is discussed elsewhere.

This gap is rarely visible in tooling - but it shows up clearly in conversations at every level.

What is becoming more important

Shared context over shared numbers

AI performs best with context, yet most teams store context poorly: architectural decisions live in chat logs, delivery scars live in memory, and "we tried this before" rarely survives team changes.

As delivery accelerates, context becomes the limiting factor, not execution speed. Tools and rituals that help teams surface and share context gain value, even if they do not make teams "faster."

Confidence as a planning outcome

Many planning sessions still produce a single output: a commitment.

What teams increasingly need is something slightly different: Where are we confident? Where are we guessing? Where would a surprise hurt most?

This is not a call to remove commitment, but to acknowledge that confidence is uneven by design - and pretending otherwise increases risk rather than reducing it.
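The three questions above can be captured as data rather than left implicit. This is a sketch only: the field names, the three-level confidence scale, and the "blast radius" score are invented for illustration, not taken from any particular tool.

```python
from dataclasses import dataclass

@dataclass
class PlanItem:
    name: str
    estimate_points: int
    confidence: str    # "confident" | "guessing" | "unknown" (invented scale)
    blast_radius: int  # 1 (isolated) .. 5 (touches everything)

def surprise_risk(items):
    """Rank items by where a surprise would hurt most: low confidence
    combined with a high blast radius floats to the top."""
    weight = {"confident": 1, "guessing": 2, "unknown": 3}
    return sorted(items,
                  key=lambda i: weight[i.confidence] * i.blast_radius,
                  reverse=True)

backlog = [
    PlanItem("rename settings page", 2, "confident", 1),
    PlanItem("migrate auth provider", 8, "guessing", 5),
    PlanItem("add CSV export", 3, "guessing", 2),
]
for item in surprise_risk(backlog):
    print(item.name, item.confidence, item.blast_radius)
```

The commitment stays; what changes is that the plan also records where the team is guessing, so the riskiest guess is discussed first rather than discovered last.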

Alignment as a resilience mechanism

AI amplifies individual productivity. It does not automatically amplify team cohesion.

Without deliberate alignment, teams risk drifting into silent asymmetries: who relies on AI, who does not, who trusts it, who compensates for it. Over time, this affects morale, perceived fairness, and trust.

Planning and estimation rituals that surface assumptions and uncertainty play an increasingly important role in maintaining team resilience - not just delivery predictability.

Hypotheses worth watching

It is tempting to predict sweeping changes. A more useful approach is to identify hypotheses that deserve observation.

Estimation will shift from precision to calibration

Teams may continue estimating, but with greater emphasis on ranges, disagreement, and confidence rather than single-point accuracy.
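One way this could look in practice is a minimal sketch like the following, which keeps the individual estimates as a range and flags spread as a signal rather than averaging it away. The max/min disagreement threshold is an arbitrary illustrative choice, not an established standard.

```python
def calibrate(estimates, spread_threshold=3.0):
    """Summarise individual estimates as a range plus a disagreement flag,
    rather than collapsing them into a single number.
    spread_threshold (max/min ratio) is an arbitrary illustrative value."""
    low, high = min(estimates), max(estimates)
    disagreement = high / low if low > 0 else float("inf")
    return {
        "range": (low, high),
        "needs_discussion": disagreement >= spread_threshold,
    }

print(calibrate([3, 3, 5]))   # tight range, no flag
print(calibrate([2, 3, 13]))  # wide spread: the disagreement is the signal
```

The useful output of the second call is not a compromise number like 6; it is the prompt to ask why one estimator sees a 13 where another sees a 2.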

Historical patterns will matter more than forecasts

As AI increases variability, past failure modes and delivery patterns may become more useful than forward-looking averages.

Planning tools will bifurcate

Some will optimise for automation and speed. Others will specialise in alignment, risk, and shared understanding. The latter may serve fewer users but deliver higher strategic value.

These are plausible directions - not conclusions. Evidence will come from how teams actually adapt under sustained AI-assisted delivery, not from theory alone.

What this means for product teams

The core practices of grooming, estimation, and planning are not disappearing. They are being repurposed.

From predicting effort to calibrating uncertainty.
From optimising throughput to managing risk and confidence.
From producing plans to producing shared understanding.

Teams that recognise this shift early are not abandoning discipline. They are redefining it.

Closing thought

AI accelerates execution. It does not remove uncertainty. In fact, it often exposes it.

The teams - and tools - that succeed over the next few years are unlikely to be those that promise perfect foresight. More likely, they will be the ones that help teams see uncertainty sooner, talk about it honestly, and adapt together.

That may not sound revolutionary. But in practice, it changes everything.

Ready to try estimation with context?
Ibis Flow captures the reasoning behind your estimates - not just the numbers. Surface similar past work, flag risks during estimation, and build context that compounds.