# Blast Radius AI: Teaching Claude Which Files Actually Matter For A PR
Code-Review-Graph computes the impact radius of a change with 100% recall — every caller, dependent, and test touched by your diff, in milliseconds.
Asking an AI to review a PR is asking it to imagine the rest of your codebase. Most tools fake that with vector retrieval and pray. Code-Review-Graph computes the actual blast radius — every caller, callee, dependent, and test affected by the diff — in milliseconds, with 100% recall.
## The Algorithm
```mermaid
flowchart TD
    START([git diff]) --> CHANGED[Changed files set C]
    CHANGED --> SHA[SHA-256 diff vs cached hash]
    SHA -->|unchanged| SKIP[Skip — reuse graph]
    SHA -->|changed| REPARSE[Re-parse changed files only]
    REPARSE --> UPDATE[(Update SQLite graph)]
    UPDATE --> SEED[Seed nodes = changed symbols]
    SEED --> BFS[Bounded BFS traversal]
    BFS --> EDGES{Edge type}
    EDGES -->|calls| CALLERS[+ callers]
    EDGES -->|called_by| CALLEES[+ callees]
    EDGES -->|tested_by| TESTS[+ tests]
    EDGES -->|inherits| HIER[+ subclasses]
    EDGES -->|imports| IMP[+ importers]
    CALLERS & CALLEES & TESTS & HIER & IMP --> SCORE[Risk score per node]
    SCORE --> RANK[Rank by centrality + change proximity]
    RANK --> BUDGET{Token budget}
    BUDGET -->|fits| SET[Minimal review set]
    BUDGET -->|over| TRIM[Trim by lowest score]
    TRIM --> SET
    SET --> AGENT[Claude / reviewer]
    style START fill:#fbbf24
    style SET fill:#22c55e,stroke:#15803d,color:#fff
    style AGENT fill:#a855f7,stroke:#7e22ce,color:#fff
```
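The two pieces that make this fast are the hash gate and the bounded traversal. Here is a minimal sketch of both in Python, assuming a symbol-level SQLite table `edges(src, dst, type)`; the schema, function names, and hop bound are illustrative assumptions, not Code-Review-Graph's actual internals.

```python
import hashlib
import sqlite3
from collections import deque

# Edge types the traversal follows, mirroring the diagram above.
EDGE_TYPES = ("calls", "called_by", "tested_by", "inherits", "imports")

def file_digest(path: str) -> str:
    """SHA-256 of file contents; if it matches the cached hash, skip re-parsing."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def blast_radius(db_path: str, seeds: set[str], max_hops: int = 3) -> dict[str, int]:
    """Bounded BFS from the changed symbols; returns node -> hop distance."""
    conn = sqlite3.connect(db_path)
    visited = {s: 0 for s in seeds}
    queue = deque(seeds)
    placeholders = ",".join("?" * len(EDGE_TYPES))
    while queue:
        node = queue.popleft()
        hops = visited[node]
        if hops >= max_hops:
            continue  # the bound keeps traversal fast on large graphs
        # Walk edges in both directions: callers and callees both belong
        # in the blast radius.
        rows = conn.execute(
            f"""SELECT CASE WHEN src = ? THEN dst ELSE src END
                FROM edges
                WHERE (src = ? OR dst = ?) AND type IN ({placeholders})""",
            (node, node, node, *EDGE_TYPES),
        ).fetchall()
        for (neighbor,) in rows:
            if neighbor not in visited:
                visited[neighbor] = hops + 1
                queue.append(neighbor)
    conn.close()
    return visited
```

Because every edge comes straight from parsed ASTs, the traversal is exhaustive by construction, which is where the recall guarantee in the next section comes from.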
## Why 100% Recall Matters More Than Precision
Code review has an asymmetric error cost. Missing a broken caller = production incident. Including an unnecessary file = a few extra tokens. Code-Review-Graph leans aggressively toward over-prediction (F1 score around 0.54), trading some precision for guaranteed recall.
The benchmark numbers tell the story: 100% recall on impact analysis, never silently dropping a broken dependency. If a function call exists in the AST, the graph finds it.
## Risk Scoring
Not every node in the blast radius matters equally. Code-Review-Graph layers risk signals onto graph results:
- Centrality: Hub nodes (high betweenness) score higher
- Change proximity: Direct callers > 2-hop > 3-hop
- Test coverage gap: Untested hotspots flagged
- Recent churn: Files changed multiple times in last 30 days are riskier
The reviewer (human or agent) gets a ranked list, not a flat dump.
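As a sketch of how those signals might combine, including the token-budget trim from the flowchart: the weights, field names, and thresholds below are illustrative assumptions, not the tool's documented formula.

```python
from dataclasses import dataclass

@dataclass
class Node:
    path: str
    hops: int            # change proximity: 1 = direct caller, 2 = two hops out
    betweenness: float   # centrality, precomputed on the graph
    has_tests: bool      # any tested_by edge?
    churn_30d: int       # commits touching this file in the last 30 days
    tokens: int          # prompt cost of including this file

def risk_score(n: Node) -> float:
    score = n.betweenness                 # hub nodes score higher
    score += 1.0 / max(n.hops, 1)         # direct callers > 2-hop > 3-hop
    if not n.has_tests:
        score += 0.5                      # untested hotspot: flag it
    score += 0.1 * min(n.churn_30d, 10)   # recent churn, capped
    return score

def review_set(nodes: list[Node], token_budget: int) -> list[Node]:
    """Rank by risk, then drop the lowest-scoring nodes until the budget fits."""
    ranked = sorted(nodes, key=risk_score, reverse=True)
    picked, spent = [], 0
    for n in ranked:
        if spent + n.tokens <= token_budget:
            picked.append(n)
            spent += n.tokens
    return picked
```

Trimming from the bottom of the ranking is what makes the budget safe: the files most likely to break are the last to go.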
## Plugging Into PR Workflows
The pattern that works in production:
- GitHub Actions trigger on `pull_request`
- Workflow runs `code-review-graph build --incremental`
- Then `code-review-graph impact --files=$CHANGED_FILES --format=json`
- Pipe the result into Claude via MCP for review
- Post inline comments via GitHub API
Total runtime on a 1,000-file repo: under 5 seconds. Total token spend per review: 1.5K–3K instead of 30K–50K.
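Glued together, that step looks roughly like the script below. The two CLI invocations come straight from the list above; the comma-joined file argument, the file path, and the surrounding scaffolding are assumptions about the wiring.

```python
import json
import subprocess

def impact_for_pr(changed_files: list[str]) -> dict:
    """Refresh the graph incrementally, then query the blast radius as JSON."""
    subprocess.run(["code-review-graph", "build", "--incremental"], check=True)
    result = subprocess.run(
        ["code-review-graph", "impact",
         f"--files={','.join(changed_files)}", "--format=json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    # Hypothetical changed file; in CI this comes from the pull_request event.
    impact = impact_for_pr(["src/billing/invoice.py"])
    # From here: hand the review set to Claude over MCP, then post its
    # findings as inline comments through the GitHub API.
    print(json.dumps(impact, indent=2))
```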
## Bonus: Knowledge Gap Detection
Same graph, different query. Code-Review-Graph surfaces:
- Untested hotspots: high-centrality nodes with no test edges
- Isolated nodes: orphan modules that nothing imports (dead code candidates)
- Bridge nodes: single-file dependencies between communities (refactor risk)
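The first two reports reduce to plain SQL over the same graph; the table and column names below are schema assumptions for illustration. Bridge nodes need community structure, so that check usually runs in code rather than a single query.

```python
import sqlite3

conn = sqlite3.connect("graph.db")  # path is illustrative

# Untested hotspots: high-centrality nodes with no tested_by edge.
untested_hotspots = conn.execute("""
    SELECT n.id FROM nodes n
    WHERE n.betweenness > 0.1
      AND NOT EXISTS (SELECT 1 FROM edges e
                      WHERE e.type = 'tested_by' AND e.src = n.id)
    ORDER BY n.betweenness DESC
""").fetchall()

# Isolated nodes: modules nothing imports (dead code candidates).
orphans = conn.execute("""
    SELECT n.id FROM nodes n
    WHERE NOT EXISTS (SELECT 1 FROM edges e
                      WHERE e.type = 'imports' AND e.dst = n.id)
""").fetchall()
```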
Run weekly. Send the report to engineering. Watch your tech debt backlog become actionable.