Cadence Advisors Group

Methodology

Diagnose. Design. Deploy. Measure.

Every engagement at Cadence — from a 60-minute micro-session to a 16-week enterprise build — runs on the same four-phase discipline. The phases are not a waterfall; they’re a loop. We cycle through them at whatever tempo the engagement length supports.

Phase 01

Diagnose

Find the real constraint before proposing anything. Most AI projects fail because they solved a visible problem instead of the binding one.

  • Workflow ride-along — we watch the work happen rather than asking the team to describe it.
  • Theory of Constraints read on the end-to-end system. Where’s throughput actually limited? It’s rarely where the team says.
  • Wardley map of the capability stack so we can see which components are commodifying under AI pressure and which are gaining leverage.
  • Honest inventory of data, tooling, and political constraints. AI strategy that ignores org dynamics is academic fiction.

Tools & frameworks: Wardley Mapping (Simon Wardley), Theory of Constraints (Goldratt), workflow shadowing, data-quality scoring.
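As a concrete illustration, data-quality scoring can be as simple as a weighted rubric over a few dimensions. A minimal sketch follows; the dimensions, weights, and numbers are illustrative, not a fixed Cadence standard.

```python
from dataclasses import dataclass

# Illustrative data-quality rubric. Dimensions, weights, and the example
# numbers are assumptions for this sketch, not a fixed standard.
@dataclass
class DataQuality:
    completeness: float  # fraction of required fields populated (0..1)
    freshness: float     # fraction of records updated inside the SLA window
    consistency: float   # fraction of records passing cross-field checks

    def score(self, weights=(0.4, 0.3, 0.3)) -> float:
        dims = (self.completeness, self.freshness, self.consistency)
        return sum(w * d for w, d in zip(weights, dims))

crm_extract = DataQuality(completeness=0.92, freshness=0.61, consistency=0.78)
print(f"data-quality score: {crm_extract.score():.2f}")  # ~0.79
```

A single number won't settle an argument, but it makes the data conversation comparable across workflows during diagnosis.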

Phase 02

Design

Pick the right intervention for the constraint you found. Design on paper before spending a dollar on tokens.

  • Explicit choice between automation (fully agentic), augmentation (human-in-loop), and refusal (this workflow stays manual, and here’s why).
  • Tool and model selection with trade-offs in writing. Claude vs. GPT-5 vs. open-weights for your specific workload, with the reasoning shown.
  • Prompt architecture, eval harness, and failure-mode inventory designed before any production code is written (a minimal harness is sketched below).
  • Rollback plan and kill criteria defined at design time, not after the fire.

Tools & frameworks: Structured decision logs, prompt design patterns, eval-harness templates (OpenAI Evals, LangSmith, custom), cost modelling against realistic token budgets.
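To make the eval-harness and cost-modelling bullets concrete, here is a minimal sketch of the shape such a harness takes. The `call_model` stub, the per-token prices, and the test case are illustrative assumptions, not real pricing or client data.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # pass/fail judgment on the model output

def call_model(prompt: str) -> tuple[str, int, int]:
    """Stub standing in for a real API call; returns
    (output, input_tokens, output_tokens)."""
    return "ACME Corp", len(prompt) // 4, 3

PRICE_IN, PRICE_OUT = 3e-6, 15e-6  # $/token; illustrative, not real pricing

def run_evals(cases: list[EvalCase]) -> None:
    passed, cost = 0, 0.0
    for case in cases:
        out, tok_in, tok_out = call_model(case.prompt)
        passed += case.check(out)
        cost += tok_in * PRICE_IN + tok_out * PRICE_OUT
    print(f"pass rate {passed}/{len(cases)}, est. cost ${cost:.6f}")

run_evals([
    EvalCase("Extract the vendor name from: 'Invoice #41 from ACME Corp'",
             check=lambda out: "ACME" in out),
])
```

Scale the case list and the prices to your workload and the same loop becomes the realistic token-budget model the design phase calls for.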

Phase 03

Deploy

Ship into production on a one-week cadence. Weekly OODA loops are how we keep senior attention on what's shipping and kill what isn't.

  • Weekly ship-or-kill reviews — no workflow stays in pilot purgatory past its second week without an explicit decision.
  • Pair implementation with your team. We work inside Cursor, Claude Projects, and your existing orchestration layer (n8n, Zapier AI, Temporal) so the work stays in systems you control.
  • Observability baked in from day one: logs, traces, cost tracking, and a simple drift dashboard (a minimal drift check is sketched below).
  • Knowledge transfer happens as the work happens. The team that will own the workflow is the team that builds it.

Tools & frameworks: Claude, GPT-5, Cursor, n8n, Zapier AI, Temporal, LangSmith, Helicone, custom eval harnesses, Notion runbooks.
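The drift dashboard can start smaller than people expect. A minimal sketch of the underlying check, with illustrative window size and tolerance:

```python
from collections import deque

class DriftMonitor:
    """Rolling pass-rate check against the launch baseline.
    Window size and tolerance are illustrative defaults."""

    def __init__(self, baseline_pass_rate: float, window: int = 200,
                 tolerance: float = 0.05):
        self.baseline = baseline_pass_rate
        self.results: deque[bool] = deque(maxlen=window)
        self.tolerance = tolerance

    def record(self, passed: bool) -> None:
        self.results.append(passed)

    def drifting(self) -> bool:
        if len(self.results) < self.results.maxlen:
            return False  # not enough data for a stable read
        rate = sum(self.results) / len(self.results)
        return (self.baseline - rate) > self.tolerance

monitor = DriftMonitor(baseline_pass_rate=0.94)
for ok in [True] * 150 + [False] * 50:  # simulated feed; production reads logs
    monitor.record(ok)
print("drift alert:", monitor.drifting())  # rolling 0.75 vs 0.94 baseline -> True
```

In production the `record` calls come from the same logs and traces the observability work already emits.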

Phase 04

Measure

Prove what it changed or decommission it. AI without measurement is theater, and theater is expensive in tokens.

  • Baseline captured before deployment, not reconstructed from memory after the fact.
  • Three outcome metrics per workflow, agreed in Phase 2 and locked before launch.
  • 30-day observation window with a written decision: keep, expand, modify, or kill (a toy decision rule is sketched below).
  • Post-mortem on every workflow that doesn’t hit its target. We learn more from the kills than from the wins, and we don’t hide either.

Tools & frameworks: Before/after metric dashboards, cost-per-outcome tracking, drift and hallucination monitors, quarterly written decision memos.
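To show what the locked-metrics decision can look like mechanically, here is a toy keep/expand/modify/kill rule. Metric names, targets, and thresholds are illustrative, and the sketch assumes all three metrics are higher-is-better.

```python
def decide(baseline: dict[str, float], observed: dict[str, float]) -> str:
    # Relative lift per metric; assumes every metric is higher-is-better.
    lifts = [observed[m] / baseline[m] - 1 for m in baseline]
    if all(l >= 0.20 for l in lifts):
        return "expand"  # beat target on every metric
    if all(l >= 0.0 for l in lifts):
        return "keep"    # net positive across the board
    if any(l >= 0.0 for l in lifts):
        return "modify"  # mixed result: rework the weak paths
    return "kill"        # worse everywhere; decommission

# Illustrative numbers, not a real client's metrics.
baseline = {"tickets_per_hour": 6.0, "first_pass_yield": 0.70, "csat": 4.1}
observed = {"tickets_per_hour": 7.8, "first_pass_yield": 0.74, "csat": 4.0}
print(decide(baseline, observed))  # "modify"
```

The real decision is a written memo, not a function, but locking the rule's shape in Phase 2 is what keeps the 30-day review honest.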

Operating principles

The methodology is the how. These are the why.

Pace, not speed

We named the firm Cadence for a reason. AI projects fail from the wrong tempo as often as from the wrong technology, either rushing past diagnosis or stalling in pilot. A defensible cadence is itself a deliverable.

Writing is the work

Every decision gets written down before it gets built. If we can’t write the argument clearly, we don’t understand it yet.

Reversibility is a feature

We prefer workflows that can be rolled back, compared against a control, and modified by your team. Irreversible commitments get the highest scrutiny.

Senior attention per dollar

Every engagement is run by someone who has shipped AI in production. We don’t leverage junior analysts because the work doesn’t benefit from leverage — it benefits from judgment.

The intellectual lineage

Our methodology sits at the intersection of four older disciplines. We name them because good thinking is borrowed, not invented, and naming your sources is honest.

Wardley Mapping gives us the visual grammar for strategy. Simon Wardley’s work on situational awareness is the best tool we’ve found for seeing which capabilities are commodifying under AI pressure and which are becoming newly valuable.

OODA loops (John Boyd) give us the tempo discipline. Weekly observe-orient-decide-act cycles are how we keep engagements from drifting into permanent pilot mode.

Theory of Constraints (Eliyahu Goldratt) is how we choose what to optimize. AI applied to a non-bottleneck is a speed-up for a queue that was already waiting on something else.

The Stripe operating doctrine — writing as the mode of thought, rigorous post-mortems, and radical respect for the reader’s time — is how we run ourselves as a firm. Two of our founding partners came from there; the habits stuck.

See the methodology applied

A micro-session is the cheapest way to see how we think. Ninety minutes of prep, a five-page doc, and you’ll know whether our approach fits your situation.