The relationship between effort and time is weakening
AI-assisted development tools - code completion, refactoring agents, test generation, documentation summarisation - are changing how work flows through teams. Not evenly and not predictably.
Two developers working on similar tasks can now experience radically different outcomes depending on their familiarity with AI tools, how well the problem is framed, and whether the work aligns with patterns AI handles well.
The result is not that everything is faster; it is that variance has increased. Estimation models that rely on stable throughput struggle under this variability.
This does not invalidate estimation outright, but it does weaken time-based precision as the primary goal.
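The effect of higher variance on throughput-based estimation can be sketched with a small Monte Carlo simulation. Everything here is illustrative: the backlog size, the two hypothetical throughput histories (deliberately constructed with the same mean, 10 items/week), and the `simulate_release` helper are all made up for the example.

```python
import random
import statistics

random.seed(7)

def simulate_release(weekly_throughput, backlog=120, trials=2000):
    """Monte Carlo: weeks needed to burn down a backlog, sampling
    weekly throughput from historical observations."""
    results = []
    for _ in range(trials):
        remaining, weeks = backlog, 0
        while remaining > 0:
            remaining -= random.choice(weekly_throughput)
            weeks += 1
        results.append(weeks)
    return results

# Two hypothetical histories with the same mean (10 items/week)
# but very different variance.
stable = [9, 10, 11, 10, 10, 9, 11, 10]
bursty = [1, 19, 2, 18, 3, 17, 4, 16]  # AI-assisted spikes and stalls

for name, history in [("stable", stable), ("bursty", bursty)]:
    weeks = sorted(simulate_release(history))
    lo, hi = weeks[100], weeks[1900]  # roughly a 90% interval
    print(f"{name}: mean={statistics.mean(weeks):.1f} weeks, "
          f"90% interval approx {lo}-{hi} weeks")
```

Both histories have identical average throughput, yet the bursty one produces a much wider forecast interval: the single number a velocity-style model would report (the mean) is the same, while the uncertainty around it is not.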
Grooming is shifting from preparation to sense-making
AI is already capable of drafting tickets, refining acceptance criteria, and proposing implementation approaches. In some teams, this work is quietly happening before humans ever meet.
What AI does not do well is explain why this work matters now, what organisational constraints apply, or which historical risks are relevant but undocumented.
As a result, grooming sessions are increasingly valuable not for shaping the ticket, but for aligning on meaning, assumptions, and risk. Teams that treat grooming as a mechanical pre-planning step are finding diminishing returns.
Velocity is losing trust - softly, not loudly
Very few teams are openly abandoning velocity. Many are, however, mentally discounting it.
AI introduces bursts of productivity that distort historical averages. Velocity still looks precise, but teams increasingly know it is fragile. This creates a subtle but important gap: numbers are reported, while confidence is discussed elsewhere.
This gap is rarely visible in tooling - but it shows up clearly in conversations at every level.
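How a single AI-assisted burst skews a rolling velocity average can be shown with a few lines of arithmetic. The sprint numbers below are invented for illustration: one sprint of well-patterned work lands at 34 points against a baseline in the low twenties.

```python
# Illustrative sprint velocities (story points); the 34 is a single
# AI-assisted burst on work that happened to fit AI-friendly patterns.
velocities = [21, 23, 22, 34, 22, 23]

three_sprint_avg = sum(velocities[-3:]) / 3  # includes the burst
without_burst = [v for v in velocities if v != 34]
baseline_avg = sum(without_burst) / len(without_burst)

print(f"rolling avg: {three_sprint_avg:.1f}, baseline: {baseline_avg:.1f}")
# Forecasting a 100-point epic from each number gives different answers:
print(f"forecast: {100/three_sprint_avg:.1f} vs {100/baseline_avg:.1f} sprints")
```

The rolling average still looks like a precise figure, but a forecast built on it quietly assumes the burst will repeat, which is exactly the fragility teams discount in conversation while the number itself keeps being reported.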