
Methodology

Monte Carlo for Sales: A Practical Guide

Serre Team · Feb 14, 2026 · 8 min read


    "Sure, your product saves time. But how much time? For us? Really?"

    If you've sold into enterprise accounts, you've heard some version of this. It's the attribution question — the moment where the buyer stops evaluating your product's capabilities and starts evaluating your numbers. And it's where most value conversations fall apart.

    The problem isn't that your estimate is wrong. It's that any single number you give is either too high (they don't believe it) or too low (it doesn't justify the spend). You're stuck. Defend the number and you sound like a salesperson. Hedge it and you sound uncertain.

    There's a better approach: stop presenting a number and start presenting a model. That model is Monte Carlo simulation. And despite the name, it doesn't require a statistics degree to use or to explain.

    What Monte Carlo Simulation Actually Is

    Monte Carlo simulation is a method for understanding the range of possible outcomes when your inputs are uncertain. Instead of calculating a single answer from a single set of assumptions, you run thousands of calculations — each with slightly different inputs — and observe the distribution of results.

    In practice, it works like this:

    1. You define a formula: the deterministic relationship between your inputs and the output. For example, annual savings = hours saved per week × hourly cost × number of affected employees × 52 weeks.
    2. For each input variable, instead of a single value, you specify a range: a low estimate, an expected estimate, and a high estimate. These three points define a triangular distribution — a simple probability shape that peaks at your expected value and tapers toward the extremes.
    3. You run 10,000 iterations. In each iteration, every variable is sampled independently from its triangular distribution. The formula is evaluated with those sampled values, producing one possible outcome.
    4. After 10,000 iterations, you sort the results and read off percentiles. P10 (the 10th percentile) is the conservative case — 90% of simulated outcomes exceeded this value. P50 (the median) is the expected case. P90 (the 90th percentile) is the optimistic case — only 10% of simulations did better.

    That's the entire method. The formula itself is deterministic and fully auditable. The uncertainty comes from the explicit confidence bounds you set on each variable — not from any black box.
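The four steps above fit in a short sketch using only Python's standard library. The ranges and the savings formula here are illustrative placeholders; note that `random.triangular` takes its arguments as (low, high, mode), not (low, mode, high):

```python
import random

# Step 2: (low, mode, high) bounds for each input -- placeholder values.
RANGES = {
    "hours_per_week": (3, 5, 7),
    "hourly_rate": (55, 65, 75),
    "employees": (30, 45, 60),
}

def simulate(ranges, iterations=10_000, seed=42):
    rng = random.Random(seed)
    outcomes = []
    for _ in range(iterations):
        # Step 3: sample each variable independently from its triangle.
        # random.triangular's argument order is (low, high, mode).
        sampled = {
            name: rng.triangular(low, high, mode)
            for name, (low, mode, high) in ranges.items()
        }
        # Step 1: the deterministic formula -- annual savings.
        outcomes.append(
            sampled["hours_per_week"] * sampled["hourly_rate"]
            * sampled["employees"] * 52
        )
    # Step 4: sort the results and read off percentiles.
    outcomes.sort()
    return {f"P{p}": outcomes[iterations * p // 100] for p in (10, 50, 90)}

percentiles = simulate(RANGES)
```

The fixed seed makes the run repeatable, which matters when a prospect wants to re-run the model and see the same numbers.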

    Why Triangular Distributions, Not Normal Distributions

    If you remember anything from statistics class, you probably remember the bell curve. So why triangular?

    Two reasons. First, triangular distributions are defined by three values that a human can actually estimate: the minimum plausible value, the most likely value, and the maximum plausible value. Ask a customer "what's the low, expected, and high end for this number?" and you get a usable answer. Ask them to estimate a mean and standard deviation and you get a blank stare.

    Second, triangular distributions have bounded tails. A normal distribution technically extends to infinity in both directions, which means your simulation can produce absurd outliers. A triangular distribution stays within the bounds your stakeholders agreed to. Every simulated outcome is defensible because it falls within a range that someone explicitly signed off on.
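Python's standard library ships a triangular sampler, which makes the bounded-tails property easy to check. The range below is a placeholder; `random.triangular`'s argument order is (low, high, mode):

```python
import random

rng = random.Random(1)
low, mode, high = 30, 45, 60   # a headcount range stakeholders signed off on

# random.triangular's argument order is (low, high, mode).
samples = [rng.triangular(low, high, mode) for _ in range(10_000)]

# Every simulated value stays inside the agreed bounds -- no absurd outliers.
assert all(low <= s <= high for s in samples)
```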

    For discovery conversations

    The easiest way to get triangular distribution inputs from a prospect: ask three questions. "What's the absolute floor for this metric — the worst realistic case?" Then "What do you expect it actually is?" Then "What would a really good outcome look like?" That's your low, mode, and high. Done.

    A Worked Example

    Let's make this concrete. Suppose you're building a value case around a "manual process automation" driver. The formula is straightforward:

    Annual Savings = Hours per Week × Hourly Rate × Affected Employees × 52

    Your prospect is a mid-market operations team. After discovery, you and the prospect agree on ranges for each variable:

    | Variable | Low | Expected | High |
    |---|---|---|---|
    | Hours saved per week per employee | 3 | 5 | 7 |
    | Fully loaded hourly rate | $55 | $65 | $75 |
    | Number of affected employees | 30 | 45 | 60 |

    If you use just the expected values, you get a single point estimate:

    5 hours × $65 × 45 employees × 52 weeks = $760,500/year

    That's a fine number. But it's also a single number — one the CFO can poke at. "Do we really have 45 people affected? Is it really 5 hours?" Each challenge to any single variable undermines the entire output.

    Now run Monte Carlo. After 10,000 runs, the sorted results might look like this:

    • P10 (conservative): $468,000/year
    • P50 (expected): $731,000/year
    • P90 (optimistic): $1,080,000/year

    Notice that the P50 isn't exactly the same as the point estimate from the expected values. That's correct — the median of a distribution of products isn't the product of the medians. This is one of the subtle ways Monte Carlo is more honest than a simple three-scenario model.
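You can verify this subtlety directly. A minimal check using the worked example's ranges (again, `random.triangular` takes its arguments as low, high, mode):

```python
import random

rng = random.Random(0)
N = 10_000

# Simulate annual savings with the worked example's triangular ranges.
savings = sorted(
    rng.triangular(3, 7, 5)        # hours saved per week
    * rng.triangular(55, 75, 65)   # fully loaded hourly rate
    * rng.triangular(30, 60, 45)   # affected employees
    * 52
    for _ in range(N)
)

point_estimate = 5 * 65 * 45 * 52   # product of the expected values: 760,500
simulated_median = savings[N // 2]  # P50 of the simulated distribution

# A product of positive random variables is right-skewed, so the
# simulated median lands a little below the point estimate.
```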

    Why This Beats Best/Worst/Middle

    You might be thinking: "I already do a three-scenario analysis. Low case, base case, high case. Same thing." It's not.

    A three-scenario model gives you exactly three outcomes. Each one uses a single set of inputs — pessimistic inputs for the low case, expected inputs for the base case, optimistic inputs for the high case. But real outcomes don't work this way. In reality, one variable might come in high while another comes in low. The hourly rate might be closer to $75 while the number of affected employees is closer to 30. Three scenarios can't capture that combination.

    Monte Carlo explores all 10,000 combinations. Some iterations pair a high hourly rate with a low headcount. Others pair moderate values across the board. The result is a continuous distribution of outcomes, not three disconnected guesses. That distribution tells you something a three-scenario model never can: the probability that the actual outcome falls within any given range.

    A three-scenario model says "the base case is $760K." Monte Carlo says "there's about an 80% chance the actual outcome falls between $468K and $1.08M, with $731K as the median expectation." One is a claim. The other is a model.

    Why CFOs Trust Ranges More Than Point Estimates

    Finance teams evaluate risk for a living. When you present a single ROI number, a CFO's trained instinct is to stress-test it. Which assumptions are too aggressive? Where's the hidden optimism? How far does the number drop if one input is wrong?

    When you present a probability range, you've already done that stress testing. The P10 figure is the answer to "what if things don't go as planned?" — and it's right there on the page. You're not defending a number anymore. You're walking through a model together.

    This changes the dynamic of the conversation. Instead of the buyer picking apart your assumptions, they're discussing which assumptions they'd adjust. "We think the affected headcount is more like 35-50, not 30-60." That's a collaborative conversation about inputs, not a confrontational one about outputs. And because the formulas are deterministic — visible and auditable — the prospect can trace any output back to the assumptions that produced it.

    The goal isn't to make a bigger claim. It's to make a defensible one. A range that acknowledges uncertainty is more credible than a point estimate that pretends uncertainty doesn't exist.

    How This Works in Practice

    Each value driver in a business case has a formula and a set of input variables. Each variable has a point estimate (the expected value) and confidence bounds — the low and high ends of the range. These bounds are set explicitly. There's no hidden algorithm deciding how uncertain a number should be.

    When the simulation runs, it samples each variable independently from its triangular distribution. The formula is evaluated with those sampled values. After 10,000 iterations, the results are sorted and the percentiles are extracted. The entire process is deterministic given the same seed — run it twice with the same inputs and you get the same output.
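That reproducibility is easy to demonstrate. A minimal sketch over a single triangular variable (the range is a placeholder):

```python
import random

def run(seed, iterations=10_000):
    """One simulation pass over a single triangular variable."""
    rng = random.Random(seed)
    # random.triangular's argument order is (low, high, mode).
    return [rng.triangular(3, 7, 5) for _ in range(iterations)]

# Same seed, same inputs -> identical results: the model is auditable.
assert run(seed=123) == run(seed=123)
# A different seed gives a different (but statistically equivalent) sample.
assert run(seed=123) != run(seed=456)
```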

    For a multi-driver business case, each driver is simulated independently and the results are summed per iteration. This means the total P10 isn't simply the sum of each driver's P10 — it accounts for the statistical reality that it's unlikely for every driver to simultaneously hit its worst case. The portfolio effect works in your favor: diversification across multiple value drivers produces a tighter overall distribution than any single driver alone.
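A quick way to see the portfolio effect, with two hypothetical drivers whose numbers are purely illustrative:

```python
import random

rng = random.Random(7)
N = 10_000

def sample(low, high, mode):
    # random.triangular's argument order is (low, high, mode).
    return [rng.triangular(low, high, mode) for _ in range(N)]

# Two hypothetical value drivers with agreed triangular ranges.
driver_a = sample(100_000, 500_000, 250_000)   # e.g. automation savings
driver_b = sample(50_000, 300_000, 150_000)    # e.g. error reduction

def p10(values):
    return sorted(values)[N // 10]

# Sum the drivers within each iteration, then take percentiles of the totals.
totals = [a + b for a, b in zip(driver_a, driver_b)]

# Both drivers rarely hit their worst case in the same iteration, so the
# portfolio P10 comes out above the sum of the individual P10s.
```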

    What the Buyer Actually Sees

    The buyer doesn't see 10,000 rows of data. They see three numbers:

    • P10 — the conservative case. "Even in a downside scenario, we expect at least this much value." This is the number the CFO will anchor on. It's the one that needs to clear the hurdle rate on its own.
    • P50 — the expected case. "This is the median outcome across thousands of simulated scenarios." It carries more weight than a point estimate because it was derived from a distribution, not picked from a spreadsheet.
    • P90 — the optimistic case. "If conditions are favorable, the upside is this." It grounds the buyer's aspirational thinking without asking them to take it on faith.

    Crucially, each of these numbers traces back to specific variable ranges. If a buyer questions the P50, you don't defend the output — you walk through the inputs. "This assumes 3-7 hours per week saved, with 5 as the most likely. Does that match your experience?" The conversation stays collaborative because the model is transparent.

    Stop Defending a Number. Present a Model.

    The attribution question — "how much value can you actually deliver for us?" — doesn't have a single right answer. Pretending it does is what makes ROI conversations adversarial. The buyer knows your number is a guess. You know your number is a guess. Everyone is performing certainty that doesn't exist.

    Monte Carlo simulation replaces that performance with honesty. It says: "We don't know the exact outcome. Nobody does. But given the ranges we've agreed on together, here's the probability-weighted distribution of outcomes." That's not a weaker claim. It's a stronger one — because it's the only claim that survives contact with a finance team.

    The math behind it is straightforward: triangular distributions, independent sampling, 10,000 iterations, sorted percentiles. No black boxes. No AI-generated numbers. Just a transparent model that turns uncertain inputs into defensible ranges.

    That's how you stop arguing about a number and start discussing a model. And models, unlike numbers, are hard to dismiss.

    Build a credible investment case in under 10 minutes.

    Three free cases. No credit card required.

    Start free trial →