No matter how savvy a marketer or writer you may be, it’s impossible to predict accurately and consistently how the market will respond to a promotion.
Obviously, the more experienced you are, the easier it is to spot glaring marketing mistakes. What is not so easy is predicting how well your target market will receive your message – or how they will perceive it.
This is where testing comes in as an invaluable aid and precaution.
I have to admit that it’s not often I have the luxury of testing an approach. For one thing, testing can be quite expensive, and not many clients really understand its value. If you’re testing an ad, you need to run the various versions in the same publication at the same time to determine which approach works best, so that the dynamics are comparable – same day, same time, same audience, etc.
Ad testing is usually done by ‘split’ testing, and generally only larger, national publications offer this facility. They publish each version in different ‘runs’ of the publication. For example, in an A/B split run, 50% of the print run carries one version of your ad and the other 50% carries the other version.
By placing various identification mechanisms in the ads, you’re then able to determine exactly which ad creates which response.
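To make the attribution mechanic concrete, here is a minimal sketch of the arithmetic: if each version carries its own identification code (say, a distinct coupon code or phone extension printed in the ad), tallying responses per code gives each version’s response rate. All the codes and figures below are invented for illustration.

```python
# Minimal sketch of split-test attribution: each ad variant carries a
# unique response code, so every reply can be traced back to the
# version that produced it. All figures are invented for illustration.

# Responses as they come in, identified by the code printed in the ad.
responses = ["AD-A", "AD-B", "AD-A", "AD-A", "AD-B", "AD-A"]

# Circulation of each split run (a 50/50 split of 100,000 copies).
circulation = {"AD-A": 50_000, "AD-B": 50_000}

def response_rates(responses, circulation):
    """Count responses per variant and divide by copies circulated."""
    counts = {code: 0 for code in circulation}
    for code in responses:
        counts[code] += 1
    return {code: counts[code] / circulation[code] for code in circulation}

print(response_rates(responses, circulation))
```

Here AD-A drew four responses to AD-B’s two, so its response rate is twice as high – exactly the comparison a split run is designed to expose.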
On the internet, you can run split testing at no cost at all by using the Google tools set up for this purpose. Free, but a little tricky technically if you’re doing it yourself and aren’t technically minded.
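Google’s tools handle the mechanics for you, but the underlying idea is simply random assignment: each visitor is bucketed into one version or the other, and the same visitor always sees the same version on repeat visits. The sketch below (with a hypothetical visitor id) shows one common way to do that, by hashing the visitor’s id.

```python
# Sketch of the mechanic behind an online A/B split: each visitor is
# assigned a variant deterministically by hashing a visitor id, so the
# same person always sees the same version. The id is hypothetical.
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor into variant A or B."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same visitor lands in the same bucket on every visit.
print(assign_variant("visitor-42") == assign_variant("visitor-42"))  # True
```

Because the hash is effectively random across visitors but fixed per visitor, roughly half the audience sees each version, and nobody’s experience flips mid-test.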
Recently, I was faced with a dilemma where, for a number of reasons, I really needed to test before an ad ran – one of these being a hot dispute between me and the client regarding which was the best approach.
With no money and no time to do a professional test, and not wanting the ad to run and perform badly – in which case I’d get blamed even though the client may have gone against my advice – I decided to do a ‘grass roots’ test of my own.
I printed out the two ad versions and either emailed or hand delivered them to people I personally knew who fitted the specific target demographic.
Without giving any indication as to which I thought was better, I simply asked them to tell me which ad caught their attention and why – as well as whether they would respond to it if they saw it in the newspaper and why – or if they wouldn’t, why not?
Immediately, without any hesitation, they all selected the same ad. What was interesting was that they all had different reasons for selecting it. Some simply said it was ‘more catchy’. Others said the headline piqued their curiosity. Others said it looked easy to read.
In contrast, the other ad was perceived as ‘boring’, ‘too much work’, ‘too much copy’.
The amazing thing was that the ad they’d chosen actually had more copy than the ad they didn’t like. The rejected ad was clearly laid out with bulleted points, and you could get its message by simply scanning it quickly. The ad they liked was in ‘story’ form, with the facts interwoven through the narrative. Yet somehow the perception was that it was ‘less’ copy, and easier to read.
The client was very much in favor of the shorter, bulleted ad with the facts presented up front. But when presented with the results of the informal testing, they decided to go with the ad the market had chosen. Without that testing, the ad the market found unappealing would’ve run – costing them money without producing a good return.
This exercise was a lesson to me. As a result, I’ve made a decision to do informal testing whenever possible. I believe it will save my clients money and the feedback I receive from the market will help me hone my approach, wording and offers in a way that nothing else could.
Informal testing is a great tool – it’s easy, quick and free – and it produces very valuable insights into the minds of those to whom I’m speaking.