Sales training is a hot topic right now: 76% of companies are increasing their focus on training, while only 51% are spending more money. Consequently, sales effectiveness leaders are under more scrutiny than ever to prove their worth. Usually, this means providing senior executives with a hard ROI number—but training ROI has never been an easy number for anyone to determine. Until now.
[SEC members, be sure to sign up for our February 3rd meeting, Boosting Sales Training Stickiness, and our January 20th webinar, “Making it Easy to Apply Complex Sales Skills in the Field”.]
There’s one main reason that most methods of ROI measurement are messy when applied to sales training: it’s impossible to retroactively attribute a specific deal or revenue gain to a specific training with any certainty. Quite simply, there are too many other variables involved in whether or not a deal closes to know whether it was the training itself that caused that result.
For example, a typical approach companies use to measure ROI is to survey reps on whether they applied their training in closing sales, or how many deals they believe they’ve won because of a training. The problem with this approach is that there’s no way to tell if the sale was made because of that specific training or because of something else—maybe the customer was already ripe for the sale, or maybe the rep simply happened to offer the right price.
And even if it were certain that the customer bought because of the new skill the rep learned in training, it’s entirely possible that the rep would have hit on that same behavior by sheer luck even without the training.
Thankfully, Automatic Data Processing (ADP)—the payroll and tax processing firm—has discovered a clever way around this problem.
ADP, like a few companies we’ve surveyed in the past, constructs all of its classroom training around actual deals reps are working on out in the field. Before training, each rep chooses an actual account from his or her territory and uses this account in all exercises and role plays throughout training. That way, when the rep goes back to apply the training in the field, there’s a real-world result to measure that comes directly from the training.
Now here’s the kicker: ADP requires that all deals reps bring to training be “stuck” deals—previously called-on accounts with little to no hope of closing. After training, reps re-approach these accounts using their new skills, flagging them in Salesforce.com.
Ken Powell, VP of Worldwide Sales Enablement at ADP, sums up their methodology nicely:
“Our training is wrapped exclusively around stalled or lost deals. By using actual accounts and active sales opportunities, we are able to immediately impact sales results, closely track the program’s success, and gain more funding.”
The genius of ADP’s approach is that using stuck deals in training automatically controls for almost everything else that could have caused the win. You know nothing else contributed to the deal closing because the rep had already tried selling to that account and nothing had worked. So if a rep goes back to that account after training and it suddenly shakes loose, it’s a fair assumption that the training was essentially the only variable that changed.
In other words, stuck deals provide a point of reference for measuring movement in the pipeline, rather than trying to mathematically attribute a single cause to a specific result. It’s not how you measure—it’s what you measure.
What do you think of this approach? How do you measure the ROI of your sales training?