
How AI-Assisted Sprint Planning Reduced Project Overruns by 45%

A 14-week fintech delivery with three teams and a non-negotiable deadline: how an AI project monitoring layer caught three critical risks before they became missed milestones, lifting on-time sprint completion from 68% to 91%.

Arko IT Services

The project that was always two weeks behind

When a team treats sprint overruns as normal, it has an information problem, not a capability problem. The warning signs are always there before the overrun lands: velocity sliding for three sprints straight, an engineer muttering about unexpected complexity in a standup, an external API dependency moving slower than anyone estimated. The information existed. Nobody was pulling it together into a coherent early warning.

That is the gap AI-assisted project monitoring is built to close.


The context: a complex multi-team fintech delivery

The engagement was a fintech platform with three engineering teams: a core API team, a data and analytics team, and an integrations team. The project was a major platform extension with regulatory exposure, so the delivery timeline did not bend.

  • 14-week delivery timeline, externally committed
  • 3 teams, 18 engineers total, distributed across two time zones
  • 7 external dependency touchpoints
  • Complex interdependency graph: integrations team dependent on core API team at multiple points
  • Previous delivery history: 68% on-time sprint completion rate

What we built: the AI project intelligence system

Component 1: automated dependency extraction

An LLM-based parser read every Jira ticket weekly and pulled out dependencies (explicit, implicit, and external) from the natural-language descriptions and comment fields.

In the first week it found 23 dependencies that were not in the formal Jira dependency graph. Invisible risks, every one of them a blocker waiting to happen.
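A minimal sketch of what one pass of that parser might look like. The prompt wording, the `call_llm` stub, and the JSON shape are illustrative assumptions, not the production system:

```python
import json

DEP_PROMPT = """You are a dependency auditor. Read this Jira ticket and list
every dependency it implies: explicit ("blocked by X"), implicit ("once the
auth endpoint exists"), or external (vendors, other teams). Respond as JSON:
{"dependencies": [{"target": "...", "kind": "explicit|implicit|external"}]}

Ticket:
"""

def call_llm(prompt: str) -> str:
    # Stub standing in for a real LLM API call; returns a canned
    # response here so the sketch is runnable.
    return '{"dependencies": [{"target": "CORE-212", "kind": "implicit"}]}'

def extract_dependencies(ticket_text: str) -> list[dict]:
    """Ask the model for dependencies, then validate its output."""
    raw = call_llm(DEP_PROMPT + ticket_text)
    deps = json.loads(raw)["dependencies"]
    # Never trust model output blindly: keep only well-formed entries.
    return [d for d in deps if {"target", "kind"} <= d.keys()]
```

In practice the weekly run would iterate over every ticket, diff the extracted edges against the formal Jira dependency graph, and surface only the edges Jira does not know about.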

Component 2: velocity anomaly detection

A time-series analysis layer monitored each team's velocity against their historical baseline and the current sprint plan:

Velocity Monitor - Week 5 Report:
----------------------------------
Core API Team:
  Sprint velocity: 38 points (baseline: 52, plan: 50)
  3-sprint trend: -8% per sprint (declining)
  Projected completion: Week 17 (plan: Week 14)
  Alert: YELLOW

Integrations Team:
  Sprint velocity: 31 points (baseline: 44, plan: 42)
  3-sprint trend: -15% per sprint (significant decline)
  Projected completion: Week 19 (plan: Week 14)
  Alert: RED - immediate attention required

The week 5 report flagged a 5-week overrun in the making, which gave the team 7 weeks to do something about it.
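The alerting rule behind a report like that can be sketched in a few lines. This toy version assumes one-week sprints, and the trend thresholds (-5% and -10% per sprint) are illustrative choices, not the tuned production values:

```python
from statistics import mean

def velocity_alert(recent: list[float], baseline: float,
                   remaining_points: float, weeks_left: int) -> str:
    """Classify a team's velocity risk.

    recent: last three sprint velocities, oldest first.
    """
    # Average per-sprint change across the last three sprints.
    changes = [(b - a) / a for a, b in zip(recent, recent[1:])]
    trend = mean(changes)

    # Naive projection: sprints needed at the most recent velocity,
    # assuming one-week sprints.
    sprints_needed = remaining_points / recent[-1]
    overrun = sprints_needed > weeks_left

    if overrun and trend < -0.10:
        return "RED"
    if overrun or trend < -0.05 or recent[-1] < 0.8 * baseline:
        return "YELLOW"
    return "GREEN"
```

Fed numbers shaped like the Integrations team's week 5 figures (velocity sliding roughly 15% per sprint, projection past the deadline), this rule lands on RED; a healthy team with stable velocity and headroom comes back GREEN.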

Component 3: stakeholder communication sentiment analysis

After each meeting, an LLM processed the transcript to pull out open commitments, open concerns, and the direction of sentiment. This caught stakeholder concerns that were being raised politely but never formally escalated.
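A sketch of the two halves of that component: a prompt that asks the model for structured meeting extracts, and a small rollup that turns successive sentiment scores into a direction. The prompt text, the -1..1 sentiment scale, and the 0.2 escalation threshold are all assumptions for illustration:

```python
MEETING_PROMPT = """From this meeting transcript, extract:
- commitments: who promised what, by when
- concerns: any doubt a stakeholder voiced, however politely
- sentiment: one number from -1 (alarmed) to 1 (confident)
Respond as JSON with keys "commitments", "concerns", "sentiment".

Transcript:
"""

def sentiment_direction(history: list[float]) -> str:
    """Direction of stakeholder sentiment across recent meetings.

    history: per-meeting sentiment scores, oldest first.
    """
    if len(history) < 2:
        return "insufficient data"
    delta = history[-1] - history[-2]
    if delta < -0.2:
        # A polite meeting can still hide a sharp drop in confidence.
        return "deteriorating - escalate"
    return "improving" if delta > 0.2 else "stable"
```

The escalation path matters more than the score itself: a meeting that reads as positive in the room can still produce a falling score once commitments and hedged concerns are counted.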

Component 4: sprint completion probability reporting
A Monte Carlo layer combined each team's velocity history, remaining backlog, open dependencies, and flagged external risks into a weekly completion forecast at three confidence levels:

graph LR
    subgraph INPUTS["Inputs"]
        VEL[Historical Velocity Distribution]
        BACK[Remaining Backlog - estimated points]
        DEP[Open Dependencies - unresolved count]
        EXT[External Risks - flagged items]
    end

    subgraph SIM["Monte Carlo Simulation - 10,000 runs"]
        SIM1[Sprint Completion Distribution]
        P50[P50 - 50% confidence date]
        P80[P80 - 80% confidence date]
        P95[P95 - 95% confidence date]
    end

    subgraph OUT["Weekly Report"]
        DASH[PM Dashboard]
        ALERT2{Alert Level}
        RECO[Recommended Actions]
    end

    VEL --> SIM1
    BACK --> SIM1
    DEP --> SIM1
    EXT --> SIM1
    SIM1 --> P50
    SIM1 --> P80
    SIM1 --> P95
    P50 --> DASH
    P80 --> DASH
    P95 --> DASH
    P95 --> ALERT2
    ALERT2 --> RECO
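The simulation core of the diagram above can be sketched as a bootstrap over historical velocities. Folding in open dependencies and external risks, as the full pipeline does, is omitted here, and the sample velocities and backlog size below are illustrative:

```python
import random

def simulate_completion(velocity_samples: list[float],
                        backlog_points: float,
                        runs: int = 10_000,
                        seed: int = 7) -> dict[str, int]:
    """Monte Carlo forecast of sprints-to-completion.

    velocity_samples: historical per-sprint velocities to resample from.
    Returns sprint counts at the 50%, 80%, and 95% confidence levels.
    """
    rng = random.Random(seed)  # seeded so weekly reports are reproducible
    outcomes = []
    for _ in range(runs):
        remaining, sprints = backlog_points, 0
        while remaining > 0:
            # Bootstrap one sprint: draw a velocity from history.
            remaining -= rng.choice(velocity_samples)
            sprints += 1
        outcomes.append(sprints)
    outcomes.sort()
    pick = lambda p: outcomes[int(p * runs) - 1]
    return {"P50": pick(0.50), "P80": pick(0.80), "P95": pick(0.95)}
```

Reporting P50/P80/P95 instead of a single date is the point: the gap between P50 and P95 tells the PM how much schedule risk is hiding behind the "most likely" finish.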

What the system caught

Risk 1, week 5. Two senior engineers on the Integrations team had been quietly pulled onto a separate internal project, knocking 30% off effective sprint capacity. Undocumented, and invisible to the PM. Fixed by week 6 through reallocation.

Risk 2, week 7. Dependency extraction found 6 Integrations tickets with an implicit dependency on a Core API feature that was not yet in the Core API sprint plan. That would have been a hard blocker in week 10. It went into the Core API plan in week 8 instead.

Risk 3, week 11. Sentiment analysis on the week 11 status call flagged four statements signaling stakeholder concern that the PM had read as positive. A follow-up call revealed the stakeholder's internal timeline had moved. Managed up front instead of discovered in week 13.


The results

Metric                                    | Previous 6-Month Baseline | During 14-Week Project
Sprint on-time completion rate            | 68%                       | 91%
Sprint overrun percentage                 | avg. 28% carryover        | avg. 9% carryover
Risks identified before becoming blockers | retrospective only        | 3 of 3 major risks
PM time on status reporting               | ~8 hrs/week               | ~3 hrs/week
Project delivered on time                 | N/A                       | Yes, Week 14

The 45% drop in sprint overruns is the headline number. The qualitative outcome may matter more. For the first time in this team's recent history, a major delivery landed on schedule. The stakeholder relationship recovered. And the team started trusting its own estimates again.


What this required from the PM

The PM was not replaced and not reduced. The hours the system gave back on status reporting went straight into the work AI cannot do: building relationships, reading organizational dynamics, and deciding how to frame risk for leadership.

The AI processed the information. The PM used it to manage better.

Free strategy call

Thirty minutes.
Three concrete recommendations.

We review your current technology landscape, identify your top three risks, and tell you what to do next. No deck, no commitment — just senior judgement, on the record.