Friday, January 16, 2009

Looking for an Outcome - Testing in B2B


I was watching Tamara Gielen's video of Jim Sterne's EIS keynote (http://www.b2bemailmarketing.com/2008/12/eis-keynote-cro.html), and it got me thinking about the challenges of testing in B2B. It's a tough exercise because of one fundamental problem: testing marketing requires defining an outcome you're looking for, so that you can say option A did a better job than option B of driving that outcome. The trouble with B2B marketing is that those outcomes jump quickly from irrelevant to immeasurable.


What do I mean by that?


Well, in B2C marketing, you can often define the outcome of a marketing campaign as purchase revenue. Send an email, observe how many people buy and how much they spend. You can then test against that outcome to see which copy, creative, subject line, or list drives more of it, and that's your best marketing option.


In B2B marketing, your sales cycles are often much longer - months if not quarters or years - and the sales cycle is often concluded offline by a sales rep getting a signature on a contract. This means that any testing you might want to do against the ideal result - the driving of revenue - is both extremely difficult to tie together and requires more time to elapse than is practical (if you have to wait three months to see enough results to determine which marketing campaign to launch, you've likely missed your window).


Similarly, if you look at things that can be easily measured in B2B, they are usually not significant enough to guide decisions: opens, clickthroughs, and form submits are not great indicators of revenue. If you test against which campaign drives more of these activities, you will likely find that free iPod giveaways perform fairly well, but as we've all seen, they are not likely to turn into good leads for your sales team to follow up on.


Luckily, there is an interim outcome that you can test against, that does correspond to revenue potential, and that is quickly determined: the qualified lead. With a definition in place of what a qualified lead is, you now have a measurement of the outcome your campaigns are trying to drive.
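To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical campaign numbers) of how you might compare two campaign variants on qualified-lead conversion rate using a standard two-proportion z-test:

```python
from math import erf, sqrt

def qualified_lead_rate_test(qualified_a, sent_a, qualified_b, sent_b):
    """Two-proportion z-test on qualified-lead conversion rates.
    Returns (rate_a, rate_b, z, two_sided_p_value)."""
    p_a = qualified_a / sent_a
    p_b = qualified_b / sent_b
    # Pooled rate under the null hypothesis that both campaigns convert equally
    pooled = (qualified_a + qualified_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF, expressed via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: each variant sent to 5,000 contacts,
# variant A produced 120 qualified leads, variant B produced 85
rate_a, rate_b, z, p = qualified_lead_rate_test(120, 5000, 85, 5000)
```

If the p-value comes back below your significance bar, you have evidence that one variant genuinely drives more qualified leads rather than just winning by chance.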


Note that what we're talking about here is leads qualified based on their interest (implicit scoring) rather than who they are (explicit scoring), as your marketing campaigns are unlikely to change the titles or industries of your audience. (For a deeper discussion of the dimensions of scoring, see http://digitalbodylanguage.blogspot.com/2008/12/dimensions-of-lead-scoring.html.)
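As an illustration of what interest-based (implicit) scoring might look like, here is a minimal sketch in Python; the activity names, point values, and qualification threshold are all made up for the example, not drawn from any particular product's model:

```python
# Hypothetical implicit-scoring weights: points reflect how strongly an
# activity signals buying interest. Tune these to your own definition
# of a qualified lead.
ACTIVITY_POINTS = {
    "email_open": 1,
    "email_click": 3,
    "whitepaper_download": 7,
    "webinar_attend": 10,
    "pricing_page_visit": 15,
}
QUALIFIED_THRESHOLD = 25  # illustrative cutoff for "qualified"

def implicit_score(activities):
    """Sum interest-based points over a lead's recorded activities."""
    return sum(ACTIVITY_POINTS.get(a, 0) for a in activities)

def is_qualified(activities):
    """A lead qualifies once its interest score crosses the threshold."""
    return implicit_score(activities) >= QUALIFIED_THRESHOLD

lead = ["email_open", "email_click", "webinar_attend", "pricing_page_visit"]
# 1 + 3 + 10 + 15 = 29 points, so this lead crosses the threshold
```

Because the score is built only from behavior, a campaign that raises genuine interest will move leads across the bar, which is exactly what makes it a usable test outcome.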


The advantage of this approach is that it can test more than just a single-point communication such as an email. In B2B marketing, we are often looking to test one sequence of communications against another (say, a 3-step program for post-webinar follow-up where we're testing two different messaging options, or an all-email version against a multi-channel version using email, direct mail, and voice). If you're doing that, you have even more need for an abstracted outcome like qualified leads in order to look at the overall effect of one sequence over another.
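A minimal sketch of that setup, with hypothetical lead ids and variant names: randomize leads across the two sequence variants up front, then tally qualified leads per variant once the sequences have run:

```python
import random

def assign_variants(lead_ids, variants=("all_email", "multi_channel"), seed=7):
    """Randomly split leads across sequence variants so the qualified-lead
    outcome can later be compared per variant."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    return {lead: rng.choice(variants) for lead in lead_ids}

def tally_qualified(assignment, qualified_leads):
    """Count qualified leads per variant; qualified_leads is the set of
    lead ids that crossed your qualification bar after the sequence ran."""
    counts = {variant: 0 for variant in set(assignment.values())}
    for lead, variant in assignment.items():
        if lead in qualified_leads:
            counts[variant] += 1
    return counts

assignment = assign_variants(range(1000))
# Pretend leads 0-49 ended up qualified after the sequences completed
counts = tally_qualified(assignment, set(range(50)))
```

The point of randomizing up front is that each multi-step sequence gets a comparable audience, so differences in qualified-lead counts can be attributed to the sequence rather than to who received it.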


I look forward to your comments on what has and has not worked in your testing efforts against longer sales cycles.

1 comment:

mkamrk said...

Great post. I think, though, that it is more important to test the activity generated by a lead over a period of time than just within a certain program to qualify or score a lead. Program effectiveness can be determined based on a particular campaign's effect on the overall activity stream of the lead. Tools like Eloqua, Marketo and NurtureHQ.com can help in doing this.