
Value Selling

Why Your CFO Doesn't Trust Your ROI Model

Serre Team · Mar 12, 2026 · 6 min read


    Somewhere right now, a CFO is looking at a vendor ROI model for the first time. They'll spend about 30 seconds on it. In those 30 seconds, they're not reading your executive summary or admiring your formatting. They're looking for three things: where the numbers come from, whether they can change the assumptions, and whether the methodology is something their finance team would recognize.

    Most vendor ROI models fail all three checks. Here's why — and what to do about it.

    The Smell Test

    CFOs review financial models for a living. They've seen hundreds of internal business cases, capital expenditure requests, and M&A analyses. Every one of those documents follows a pattern: stated assumptions, visible formulas, sensitivity analysis, and a clear methodology section. When a vendor ROI model arrives and it's a polished slide with a big number and no supporting detail, the CFO's reaction isn't skepticism. It's dismissal.

    The problem isn't that your numbers are wrong. It's that there's no way for the CFO to determine whether they're right. An opaque model is an untrustworthy model, regardless of how accurate it actually is.

    Single-Point Estimates Are Structurally Indefensible

    Most vendor business cases present a single number: "Your company will save $2.4M annually." This is the worst possible way to express a financial projection, and finance teams know it immediately.

    Every financial model involves uncertainty. Costs fluctuate. Adoption rates vary. Productivity improvements depend on implementation quality, user behavior, and a dozen other variables. A single number pretends none of that uncertainty exists. It's not a forecast — it's a wish.

    Compare this to how the CFO's own team builds internal models. They use ranges. They run sensitivity analyses. They present best-case, expected, and conservative scenarios — or better yet, probability distributions. When your model shows up with a single point estimate and their internal models use Monte Carlo simulation, the credibility gap is instant and fatal.

    The Attribution Problem

    Even when a vendor's total value estimate is reasonable, there's a deeper question the CFO always asks: "How much of that is actually your product?"

    Consider a claim like "40% reduction in mean time to resolution." Maybe that's plausible across the industry. But how much of that reduction comes from the software, how much from the process changes that accompany the implementation, and how much from the team simply paying more attention because there's a new tool? The vendor's model attributes everything to the product. The CFO knows that's not how it works.

    This is the attribution question, and it's where most vendor ROI models collapse entirely. Not because the total value is wrong, but because the model has no mechanism for acknowledging what it doesn't know.


    The fix isn't to lower your numbers. It's to express them as probability ranges with explicit confidence bounds. Wide bounds on a variable signal honest uncertainty — and paradoxically make the rest of the model more credible.

    What Transparency Actually Means

    Some vendors respond to the trust problem by "showing their work" — adding a page to the deck that lists their assumptions. This is better than nothing, but it's not transparency. Transparency means the buyer can see the formula, change the inputs, and watch the output respond.

    There's a fundamental difference between these two approaches:

    • Assertion: "Based on industry benchmarks, your company will save $2.4M annually." The buyer has to take your word for it.
    • Model: "Here's the formula: incidents_per_month × mttr_reduction × cost_per_incident_minute × 12. Here are the inputs, sourced from your industry benchmarks. Adjust anything you disagree with." The buyer can verify it themselves.
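The "model" approach above can be sketched in a few lines. This is a minimal illustration, not Serre's actual implementation; the benchmark values are hypothetical placeholders the buyer would replace with their own data.

```python
# A transparent model: a visible formula with named variables,
# plus editable inputs. All benchmark numbers below are illustrative.

def annual_savings(
    incidents_per_month: float,
    mttr_reduction_minutes: float,
    cost_per_incident_minute: float,
) -> float:
    """incidents_per_month x mttr_reduction x cost_per_incident_minute x 12."""
    return incidents_per_month * mttr_reduction_minutes * cost_per_incident_minute * 12

# Benchmark defaults — the buyer can override any of these.
benchmark = {
    "incidents_per_month": 120,
    "mttr_reduction_minutes": 45,
    "cost_per_incident_minute": 37.0,
}

print(f"${annual_savings(**benchmark):,.0f}")  # → $2,397,600
```

Because every input is a named parameter, the prospect's finance team can substitute their own incident volume or cost rate and watch the output respond, which is exactly the audit-and-override loop the bullet describes.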

    The first approach puts the burden of trust on the vendor. The second puts the burden of disproof on the skeptic. That's a fundamentally different negotiating position.

    Deterministic formulas with named variables aren't just more transparent — they're more flexible. The prospect can override any assumption. Their finance team can plug in their own data. The model becomes a collaborative tool rather than a vendor assertion.

    Monte Carlo: The Answer to "I Don't Believe Your Number"

    If transparency solves the "where does this come from?" problem, Monte Carlo simulation solves the "how confident are you?" problem.

    Instead of presenting a single estimate, you run the model 10,000 times. Each iteration samples the input variables from triangular distributions defined by low, expected, and high bounds. The result isn't a number — it's a probability distribution.

    That distribution gets expressed as percentiles:

    • P10 (Conservative): There's a 90% chance the actual outcome exceeds this value.
    • P50 (Expected): The median outcome — equally likely to be above or below.
    • P90 (Optimistic): Only a 10% chance the outcome is this high or higher.
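The simulation described above can be sketched as follows. The triangular bounds are illustrative assumptions, not sourced benchmarks, and the formula reuses the earlier savings calculation.

```python
# Monte Carlo sketch: sample each input from a triangular distribution
# defined by (low, expected, high) bounds, then read off percentiles.
# All bounds here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=0)
N = 10_000  # iterations

# (low, mode, high) bounds for each input variable
incidents = rng.triangular(80, 120, 160, N)      # incidents per month
reduction = rng.triangular(20, 45, 60, N)        # MTTR reduction, minutes
cost_rate = rng.triangular(25.0, 37.0, 50.0, N)  # $ per incident-minute

# Same formula as the deterministic model, vectorized across all runs
annual_savings = incidents * reduction * cost_rate * 12

p10, p50, p90 = np.percentile(annual_savings, [10, 50, 90])
print(f"P10 (conservative): ${p10:,.0f}")
print(f"P50 (expected):     ${p50:,.0f}")
print(f"P90 (optimistic):   ${p90:,.0f}")
```

The output is not one number but a distribution: the P10 figure is what you show the skeptic, the P50 is what you forecast, and the spread between them is an honest statement of the model's uncertainty.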

3.2× — median ROI at P50 across Serre-built business cases (Serre platform data, 2026)

    This reframes the conversation entirely. Instead of "do you believe $2.4M?" it becomes "even in the conservative case, this investment pays back in 9 months. In the expected case, it's a 3.2× return." The CFO doesn't have to trust your number. They can evaluate the range and decide which scenario they find credible.

    This is exactly how the CFO's own team models uncertainty. When you speak their language, you earn a different kind of attention.

    What Good Value Modeling Looks Like

    The fix for vendor ROI distrust isn't better numbers. It's better methodology. Specifically:

    • Visible formulas with named variables that the prospect can audit line by line
    • Editable assumptions so the buyer can substitute their own data and watch the model respond
    • Industry benchmarks with sources — not "industry data shows" but specific operational benchmarks tied to company size, industry, and functional area
    • Probability ranges instead of point estimates, generated through Monte Carlo simulation with explicit confidence bounds on every variable
    • Configurable value drivers that map to specific product capabilities, not vague "productivity improvement" claims

    The shift isn't cosmetic. It's structural. You're moving from "trust our estimate" to "interrogate our model." From a templated spreadsheet with last quarter's assumptions to an intelligent model built for this prospect's context.

    The CFO who dismissed your ROI slide in 30 seconds will spend 30 minutes with a model they can take apart. That's the difference between a vendor pitch and an investment analysis.

    Build a credible investment case in under 10 minutes.

    Three free cases. No credit card required.

    Start free trial →