

The Secret to Measuring the ROI of Sales Training

Sales training is a hot topic right now: 76% of companies are increasing their focus on training, while only 51% are spending more money on it. Consequently, sales effectiveness leaders are under more scrutiny than ever to prove their worth. Usually, that means giving senior executives a hard ROI number, but training ROI has never been an easy number for anyone to determine. Until now.

[SEC members, be sure to sign up for our February 3rd meeting, Boosting Sales Training Stickiness, and our January 20th webinar, “Making it Easy to Apply Complex Sales Skills in the Field”.]

There’s one main reason most methods of ROI measurement get messy when applied to sales training: it’s impossible to retroactively attribute a specific deal or revenue gain to a specific training session with any certainty. Quite simply, too many other variables determine whether or not a deal closes to know whether it was the training itself that caused the result.

For example, a typical approach companies use to measure ROI is to survey reps on whether they applied their training in closing sales, or how many deals they think they’ve won because of a training. The problem with this approach is that there’s no way to tell whether the sale was made because of that specific training or because of something else: maybe the customer was already ripe for the sale, or maybe the rep just lucked upon the right price.

And even if it were certain that the customer bought because of the new skill the rep learned in training, it’s entirely possible that the rep would have stumbled onto that behavior anyway, without ever having been trained on it.

Thankfully, Automatic Data Processing (ADP)—the payroll and tax processing firm—has discovered a clever way around this problem.

ADP, like a few companies we’ve surveyed in the past, constructs all of its classroom training around actual deals reps are working on out in the field.  Before training, each rep chooses an actual account from his/her territory, and uses this account in all exercises and role plays throughout training.  That way, when the rep goes back to apply the training in the field, you have a real-world result to measure that comes directly from the training.

Now here’s the kicker: ADP requires that all deals reps bring to training be “stuck” deals: previously called-on accounts with little to no hope of closing. After training, reps then re-approach these accounts using their new skills and flag them in the pipeline so their progress can be tracked.

Ken Powell, VP of Worldwide Sales Enablement at ADP, sums up their methodology nicely:

“Our training is wrapped exclusively around stalled or lost deals. By using actual accounts and active sales opportunities, we are able to immediately impact sales results, closely track the program’s success, and gain more funding.”

The genius of ADP’s approach is that using stuck deals in training automatically controls for almost everything else that could have caused the win.  You know nothing else contributed to this deal closure because the rep already tried selling to that account and nothing had worked.  But if a rep goes back to that account after training and it suddenly shakes loose, it’s a fair assumption that the training was one of the only variables that changed.

In other words, stuck deals provide a point of reference for measuring movement in the pipeline, rather than trying to mathematically attribute a single cause to a specific result.  It’s not how you measure—it’s what you measure.
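The arithmetic this enables is simple. As a minimal sketch (every account name, deal value, and cost figure below is invented purely for illustration, not taken from ADP), ROI on a stuck-deal program can be computed as revenue recovered from previously stuck deals, net of the training cost:

```python
# Hypothetical sketch of stuck-deal ROI math (all numbers invented).
# Because these deals were stuck before training, revenue from any that
# close afterward can be attributed largely to the training itself.

stuck_deals = [
    {"account": "A", "value": 50_000, "closed_after_training": True},
    {"account": "B", "value": 80_000, "closed_after_training": False},
    {"account": "C", "value": 30_000, "closed_after_training": True},
]

training_cost = 20_000  # total program cost for this cohort

# Revenue from stuck deals that shook loose after training
recovered_revenue = sum(
    d["value"] for d in stuck_deals if d["closed_after_training"]
)

# Classic ROI formula: (gain - cost) / cost
roi = (recovered_revenue - training_cost) / training_cost

print(f"Recovered revenue: ${recovered_revenue:,}")
print(f"Training ROI: {roi:.0%}")
```

The point of the stuck-deal constraint is that `recovered_revenue` needs little further adjustment: there is no plausible baseline of “deals we would have closed anyway” to subtract, because the baseline close rate on these accounts was effectively zero.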

What do you think of this approach?  How do you measure the ROI of your sales training?

Comments from the Network (2)

  1. Dave Baldwin
    on January 20, 2012

    Applying training exclusively to stuck deals is a good concept – IF your sales objective is to revive dead accounts or make something happen with a stuck deal. But what if that’s not your goal? I would argue that if a sales organization is looking to, for instance, expand into new territory or markets where they have no history, this particular ROI measurement technique may be less effective (though I’m sure the exercise would still be useful).

  2. Kevin Avery
    on August 29, 2013

    It’s a good idea where it fits, but it probably applies only if you’re tweaking rather than trying to transform selling. That said, in a pilot I ran we happened to attract 100% stalled deals or no-better-than-jump-ball competitive displacement attempts, so we too were able to attribute much of the success to our new approach (we risk-adjusted the result by applying an artificially high close rate to estimate “what we’d have sold anyway”).

    An even more fundamental question is whether sales training is a good idea at all in a given circumstance. In many organizations, the mere existence of a training organization leads to “training is the answer; what’s your question?” We’ve all seen statistics, and if we’re honest we know they’re accurate, saying that 98% (pick your own high number) of sales training is wasted. (I know, I know: ours is always in the 2%, and our kids are all above average, and so on.)

    So we should ask ourselves whether a more guild-inspired, experiential approach is better suited. The answer is yes whenever the focus is on skills rather than knowledge transfer, and especially when the risk context is high (as in changing selling behavior). Enablement of this type is more complex, but it’s the job of leadership to distill something simple enough to execute from what is by nature complex and fraught with ambiguity.
