Cadence Advisors Group
Financial Services · 14-week build, 8-week parallel-run validation · 22 weeks total

Condensed underwriting decision time from 6 days to 11 hours at a mid-market fintech

Client: Mid-market Fintech (SMB Working Capital Lender)

A working-capital lender's underwriting queue was bottlenecked on document review. We built a document-intake and pre-decision system that compressed average time-to-decision from 6.1 business days to 11 working hours while holding default rate flat against the control cohort.

  • 92% of applications pre-decisioned without human review
  • Default rate flat vs. control (within 0.2pts)
  • +34% application-to-funded conversion
  • 11 of 14 underwriters redeployed to portfolio monitoring

Challenge

The client funds SMB working capital lines between $50k and $1.5M, and was losing deals to faster competitors. Their underwriting process required human review of bank statements, tax returns, and accounts-receivable aging reports — roughly 40-60 pages per application. Average time-to-decision was 6.1 business days; competitors were quoting 48 hours. Sales was losing roughly a third of qualified applicants to the delay.

The previous CTO had tried to solve this by hiring offshore reviewers, which collapsed under training overhead, and then by licensing a document-AI vendor whose extraction accuracy was 78% — too low to trust on a loan book where a misread of current-period revenue is a six-figure write-off waiting to happen. The client needed extraction reliable enough to feed a credit model directly, and they needed it auditable enough to explain to regulators.

Approach

Extraction had to clear 99%+ on the fields that entered the credit model (trailing-12-month revenue, NSF counts, average daily balance, top-5-customer concentration). Everything else could be lower-confidence because a human would see it only if the application was a borderline case anyway.

We built a document-classification layer (bank statement vs. tax return vs. AR aging) feeding into field-specific extractors, each with its own evaluation set built from 3,200 historical applications that had been hand-labeled by their senior underwriter. Every extracted field carried a confidence score and a citation back to the source page; the credit model could refuse to decision if any input field came in below threshold, kicking those applications to a human with the low-confidence fields flagged.
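The confidence-gating pattern described above can be sketched as follows. This is an illustrative sketch only: the field names, thresholds, and routing labels are assumptions for the example, not the client's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ExtractedField:
    name: str
    value: float
    confidence: float   # 0.0-1.0, reported by the extractor
    source_page: int    # citation back to the source document

# Fields that feed the credit model carry a stricter bar (the "99%+" tier);
# everything else tolerates a lower default threshold. Values are illustrative.
CREDIT_MODEL_THRESHOLD = 0.99
DEFAULT_THRESHOLD = 0.90
CREDIT_MODEL_FIELDS = {
    "ttm_revenue",
    "nsf_count",
    "avg_daily_balance",
    "top5_customer_concentration",
}

def route_application(fields: list[ExtractedField]) -> dict:
    """Auto-decision only if every field clears its threshold;
    otherwise kick to a human with the low-confidence fields flagged."""
    low_confidence = [
        f.name for f in fields
        if f.confidence < (
            CREDIT_MODEL_THRESHOLD if f.name in CREDIT_MODEL_FIELDS
            else DEFAULT_THRESHOLD
        )
    ]
    if low_confidence:
        return {"route": "human_review", "flagged": low_confidence}
    return {"route": "auto_decision", "flagged": []}
```

The key design choice is that the gate is per-field, not per-document: one shaky credit-model input is enough to refuse an automated decision, while low confidence on an ancillary field never blocks the fast path.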

We ran the system in parallel with the human underwriting team for eight weeks. The system's decision was logged but not binding, and a weekly reconciliation meeting reviewed every disagreement between model and human. Those reviews produced 31 material prompt and schema adjustments before we flipped the system into binding mode.
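The shadow-mode reconciliation loop amounts to logging both decisions and surfacing every disagreement for the weekly review. A minimal sketch, assuming a simple in-memory record (the actual system logged to the Postgres decision ledger, and these field names are hypothetical):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ShadowRecord:
    """One application decided by both the model (non-binding) and a human."""
    application_id: str
    model_decision: str   # e.g. "approve", "decline", "refer"
    human_decision: str
    logged_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    @property
    def disagreement(self) -> bool:
        return self.model_decision != self.human_decision

def reconciliation_queue(records: list[ShadowRecord]) -> list[ShadowRecord]:
    """Every model/human disagreement, for the weekly reconciliation meeting."""
    return [r for r in records if r.disagreement]
```

Each disagreement that survived review became a candidate prompt or schema adjustment; the 31 material changes came out of exactly this queue before the system went binding.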

Outcome

The system now pre-decisions 92% of applications without human review. Average time-to-decision fell from 6.1 business days to 11 working hours (median is 4 hours 12 minutes). Default rate is within 0.2 percentage points of the historical control cohort — statistically indistinguishable. Application-to-funded conversion is up 34%, directly attributable to reaching applicants before competitors do. Eleven of the fourteen underwriters have been redeployed into portfolio monitoring and early-warning work, which the lender had historically deferred. The remaining three handle the 8% of edge cases the system flags, which is roughly the workload one underwriter would have carried before.

Stack

  • Claude Opus for extraction + reasoning
  • Document classifier built in-house
  • Decision ledger on Postgres
  • Airflow + Kafka orchestration

Working on something similar?

A partner will respond personally within one business day. If there isn't a fit, we'll tell you so and point you somewhere better.