"Sure, your product saves time. But how much time? For us? Really?"
If you've sold into enterprise accounts, you've heard some version of this. It's the attribution question — the moment where the buyer stops evaluating your product's capabilities and starts evaluating your numbers. And it's where most value conversations fall apart.
The problem isn't that your estimate is wrong. It's that any single number you give is either too high (they don't believe it) or too low (it doesn't justify the spend). You're stuck. Defend the number and you sound like a salesperson. Hedge it and you sound uncertain.
There's a better approach: stop presenting a number and start presenting a model. That model is Monte Carlo simulation. And despite the name, it doesn't require a statistics degree to use or to explain.
Monte Carlo simulation is a method for understanding the range of possible outcomes when your inputs are uncertain. Instead of calculating a single answer from a single set of assumptions, you run thousands of calculations — each with slightly different inputs — and observe the distribution of results.
In practice, it works like this:

1. Write down the value formula for the driver you're modeling.
2. For each input variable, agree on three values: the low, the most likely, and the high end of the plausible range. Those three values define a triangular distribution.
3. Sample each variable from its distribution and evaluate the formula.
4. Repeat 10,000 times.
5. Sort the results and read off the percentiles — P10, P50, P90.
That's the entire method. The formula itself is deterministic and fully auditable. The uncertainty comes from the explicit confidence bounds you set on each variable — not from any black box.
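The steps above can be sketched in a few lines of Python. This is a minimal illustration, not any particular tool's implementation; the variable names and toy ranges are placeholders. One detail worth a comment: the standard library's `random.triangular` takes its arguments in the order `(low, high, mode)`.

```python
import random

def monte_carlo(variables, formula, n=10_000, seed=42):
    """variables: {name: (low, mode, high)} triangular ranges.
    Returns the sorted distribution of formula outcomes."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n):
        # Note: random.triangular's argument order is (low, high, mode).
        sampled = {name: rng.triangular(low, high, mode)
                   for name, (low, mode, high) in variables.items()}
        outcomes.append(formula(**sampled))
    outcomes.sort()
    return outcomes

# Toy usage with made-up ranges: annual value = units * margin.
dist = monte_carlo({"units": (800, 1000, 1300), "margin": (4, 5, 6)},
                   lambda units, margin: units * margin)
p10, p50, p90 = (dist[int(len(dist) * p)] for p in (0.10, 0.50, 0.90))
```

Every outcome in `dist` is just the formula evaluated with inputs drawn from the agreed ranges — nothing more exotic than that.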
If you remember anything from statistics class, you probably remember the bell curve. So why triangular?
Two reasons. First, triangular distributions are defined by three values that a human can actually estimate: the minimum plausible value, the most likely value, and the maximum plausible value. Ask a customer "what's the low, expected, and high end for this number?" and you get a usable answer. Ask them to estimate a mean and standard deviation and you get a blank stare.
Second, triangular distributions have bounded tails. A normal distribution technically extends to infinity in both directions, which means your simulation can produce absurd outliers. A triangular distribution stays within the bounds your stakeholders agreed to. Every simulated outcome is defensible because it falls within a range that someone explicitly signed off on.
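You can verify the bounded-tails property directly with the standard library's samplers (a quick sketch; the `[3, 7]` range is just an example):

```python
import random

rng = random.Random(0)

# Triangular samples can never leave the agreed-upon [3, 7] range.
tri = [rng.triangular(3, 7, 5) for _ in range(100_000)]
assert 3 <= min(tri) and max(tri) <= 7

# A normal with a comparable spread has unbounded tails, so some
# samples escape the range your stakeholders signed off on.
norm = [rng.gauss(5, 1) for _ in range(100_000)]
outliers = sum(1 for x in norm if x < 3 or x > 7)
```

With a mean of 5 and a standard deviation of 1, roughly 4-5% of the normal samples land outside the bounds — outcomes no one agreed to.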
For discovery conversations
The easiest way to get triangular distribution inputs from a prospect: ask three questions. "What's the absolute floor for this metric — the worst realistic case?" Then "What do you expect it actually is?" Then "What would a really good outcome look like?" That's your low, mode, and high. Done.
Let's make this concrete. Suppose you're building a value case around a "manual process automation" driver. The formula is straightforward:
Annual Savings = Hours per Week × Hourly Rate × Affected Employees × 52
Your prospect is a mid-market operations team. After discovery, you and the prospect agree on ranges for each variable:
| Variable | Low | Expected | High |
|---|---|---|---|
| Hours saved per week per employee | 3 | 5 | 7 |
| Fully loaded hourly rate | $55 | $65 | $75 |
| Number of affected employees | 30 | 45 | 60 |
If you use just the expected values, you get a single point estimate:
5 hours × $65 × 45 employees × 52 weeks = $760,500/year
That's a fine number. But it's also a single number — one the CFO can poke at. "Do we really have 45 people affected? Is it really 5 hours?" Each challenge to any single variable undermines the entire output.
Now run Monte Carlo. After 10,000 runs, the sorted results might look like this:

| Percentile | Annual Savings |
|---|---|
| P10 (conservative) | $468,000 |
| P50 (median) | $731,000 |
| P90 (optimistic) | $1,080,000 |
Notice that the P50 isn't exactly the same as the point estimate from the expected values. That's correct — the median of a distribution of products isn't the product of the medians. This is one of the subtle ways Monte Carlo is more honest than a simple three-scenario model.
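This whole worked example fits in a short script. It uses the ranges from the table above; the exact percentile values will shift slightly with the seed and iteration count, so treat the specific outputs as illustrative:

```python
import random

N, rng = 10_000, random.Random(42)
results = []
for _ in range(N):
    # random.triangular takes (low, high, mode).
    hours = rng.triangular(3, 7, 5)
    rate = rng.triangular(55, 75, 65)
    employees = rng.triangular(30, 60, 45)
    results.append(hours * rate * employees * 52)
results.sort()

p10, p50, p90 = (results[int(N * p)] for p in (0.10, 0.50, 0.90))
point_estimate = 5 * 65 * 45 * 52  # 760,500

# The simulated median sits below the point estimate: the product of
# symmetric triangular variables is right-skewed, so median < mean.
assert p50 < point_estimate
```

That final assertion is the point made above in code: multiplying symmetric distributions produces a skewed result, which a product of single expected values quietly ignores.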
You might be thinking: "I already do a three-scenario analysis. Low case, base case, high case. Same thing." It's not.
A three-scenario model gives you exactly three outcomes. Each one uses a single set of inputs — pessimistic inputs for the low case, expected inputs for the base case, optimistic inputs for the high case. But real outcomes don't work this way. In reality, one variable might come in high while another comes in low. The hourly rate might be closer to $75 while the number of affected employees is closer to 30. Three scenarios can't capture that combination.
Monte Carlo explores all 10,000 combinations. Some iterations pair a high hourly rate with a low headcount. Others pair moderate values across the board. The result is a continuous distribution of outcomes, not three disconnected guesses. That distribution tells you something a three-scenario model never can: the probability that the actual outcome falls within any given range.
A three-scenario model says "the base case is $760K." Monte Carlo says "there's about an 80% chance the actual outcome falls between $468K and $1.08M, with $731K as the median expectation." One is a claim. The other is a model.
Finance teams evaluate risk for a living. When you present a single ROI number, a CFO's trained instinct is to stress-test it. Which assumptions are too aggressive? Where's the hidden optimism? How far does the number drop if one input is wrong?
When you present a probability range, you've already done that stress testing. The P10 figure is the answer to "what if things don't go as planned?" — and it's right there on the page. You're not defending a number anymore. You're walking through a model together.
This changes the dynamic of the conversation. Instead of the buyer picking apart your assumptions, they're discussing which assumptions they'd adjust. "We think the affected headcount is more like 35-50, not 30-60." That's a collaborative conversation about inputs, not a confrontational one about outputs. And because the formulas are deterministic — visible and auditable — the prospect can trace any output back to the assumptions that produced it.
The goal isn't to make a bigger claim. It's to make a defensible one. A range that acknowledges uncertainty is more credible than a point estimate that pretends uncertainty doesn't exist.
Each value driver in a business case has a formula and a set of input variables. Each variable has a point estimate (the expected value) and confidence bounds — the low and high ends of the range. These bounds are set explicitly. There's no hidden algorithm deciding how uncertain a number should be.
When the simulation runs, it samples each variable independently from its triangular distribution. The formula is evaluated with those sampled values. After 10,000 iterations, the results are sorted and the percentiles are extracted. The entire process is deterministic given the same seed — run it twice with the same inputs and you get the same output.
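Determinism under a fixed seed is easy to demonstrate (a minimal sketch reusing the example driver's ranges):

```python
import random

def simulate(seed, n=1_000):
    rng = random.Random(seed)
    return sorted(rng.triangular(3, 7, 5) * rng.triangular(55, 75, 65)
                  * rng.triangular(30, 60, 45) * 52 for _ in range(n))

# Same seed, same inputs -> bit-for-bit identical output, every run.
assert simulate(seed=7) == simulate(seed=7)
# A different seed walks a different sample path through the same ranges.
assert simulate(seed=7) != simulate(seed=8)
```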
For a multi-driver business case, each driver is simulated independently and the results are summed per iteration. This means the total P10 isn't simply the sum of each driver's P10 — it accounts for the statistical reality that it's unlikely for every driver to simultaneously hit its worst case. The portfolio effect works in your favor: diversification across multiple value drivers produces a tighter overall distribution than any single driver alone.
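The portfolio effect can be shown with two drivers summed per iteration. Driver 1 is the automation example from above; driver 2's ranges are invented purely for illustration:

```python
import random

N, rng = 10_000, random.Random(1)
driver_a, driver_b, totals = [], [], []
for _ in range(N):
    # Driver 1: the manual-process-automation example.
    a = (rng.triangular(3, 7, 5) * rng.triangular(55, 75, 65)
         * rng.triangular(30, 60, 45) * 52)
    # Driver 2: a hypothetical second value driver (made-up ranges).
    b = rng.triangular(100_000, 400_000, 220_000)
    driver_a.append(a)
    driver_b.append(b)
    totals.append(a + b)  # summed within the same iteration

for dist in (driver_a, driver_b, totals):
    dist.sort()

def p10(dist):
    return dist[int(N * 0.10)]

# The total's P10 beats the sum of the individual P10s: it's unlikely
# that both drivers hit their worst case in the same iteration.
assert p10(totals) > p10(driver_a) + p10(driver_b)
```

The same logic extends to any number of drivers: sum within each iteration first, then take percentiles of the totals.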
The buyer doesn't see 10,000 rows of data. They see three numbers:

- **P10** — the conservative case: 90% of simulated outcomes land above this value.
- **P50** — the median: the middle of the distribution.
- **P90** — the optimistic case: only 10% of simulated outcomes land above it.
Crucially, each of these numbers traces back to specific variable ranges. If a buyer questions the P50, you don't defend the output — you walk through the inputs. "This assumes 3-7 hours per week saved, with 5 as the most likely. Does that match your experience?" The conversation stays collaborative because the model is transparent.
The attribution question — "how much value can you actually deliver for us?" — doesn't have a single right answer. Pretending it does is what makes ROI conversations adversarial. The buyer knows your number is a guess. You know your number is a guess. Everyone is performing certainty that doesn't exist.
Monte Carlo simulation replaces that performance with honesty. It says: "We don't know the exact outcome. Nobody does. But given the ranges we've agreed on together, here's the probability-weighted distribution of outcomes." That's not a weaker claim. It's a stronger one — because it's the only claim that survives contact with a finance team.
The math behind it is straightforward: triangular distributions, independent sampling, 10,000 iterations, sorted percentiles. No black boxes. No AI-generated numbers. Just a transparent model that turns uncertain inputs into defensible ranges.
That's how you stop arguing about a number and start discussing a model. And models, unlike numbers, are hard to dismiss.