Testing, testing...1,2,3

11/03/2008 09:27:00 AM


Last week we heard tips from Analytics Evangelist Avinash Kaushik on how to measure the in-store impact resulting from online investments. Today we are continuing in this vein and discussing ways for retailers to test the online-offline impact for themselves.

The average American spends as much time on the Internet as watching TV (31 percent of total media consumption is online; source: Jupiter Research/Ipsos Insight Entertainment and Media Consumer Survey, 08/07, US online consumers only), yet advertisers spend only around 6 percent of their total media budgets on online advertising. One big reason for this, at least in the retailing world, has been the difficulty of showing how online advertising affects consumer purchasing behavior in stores.

According to a study done last year by Yahoo! and comScore, 89 percent of consumers look for information about products online. While selling products online is a great source of revenue, perhaps the true value of online advertising is its effect on what consumers do in store. So how can you figure out the effect that your online advertising has on offline purchases?

One of the most common methods is to look at the individual consumer. This can be done through online coupons, credit card tracking, individual surveys, or Nielsen's HomeScan tracking data (where people who opt in physically scan the products they buy using a bar-code reader at home). Although the individual-consumer method can surely produce interesting results, it is often too difficult and too expensive for most advertisers.

Another approach that seems to be more scalable is randomized test and control, which is similar to the method used for FDA drug testing. The three basic elements are designing the test, running the test, and analyzing the results. The design is perhaps the most important element, because if your design is poor the results will be meaningless. Although this method can have dozens of variations, I'll give a simplified overview of the design to get you thinking about how you can measure the online/offline impact.

First, you would choose cells, each with multiple metropolitan test areas. The test areas should have similar attributes (population, sales, etc.), and each cell would be exposed to a different variation of advertising. Here is one possible design:

Cell 1: No online ads in test product categories

Cell 2: Existing online ads, no change from campaigns prior to test

Cell 3: Max out Google Search and max out potential keywords, but use no Display

Cell 4: Max out Google Search and layer on Display ads to boost reach
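Once in-store sales come back from the matched test areas, the analysis step boils down to comparing each cell against the no-ads control (Cell 1). Here is a minimal sketch of that comparison; all sales figures below are invented for illustration, not real results:

```python
# Hypothetical in-store sales per cell, aggregated across matched test areas.
# All numbers are invented for illustration only.
cell_sales = {
    "Cell 1: no online ads (control)": 1_000_000,
    "Cell 2: existing online ads": 1_030_000,
    "Cell 3: max search, no display": 1_080_000,
    "Cell 4: max search + display": 1_120_000,
}

control = cell_sales["Cell 1: no online ads (control)"]
for cell, sales in cell_sales.items():
    # Percent lift in offline sales relative to the control cell.
    lift = (sales - control) / control * 100
    print(f"{cell}: {lift:+.1f}% lift vs. control")
```

In practice you would run this comparison per test area rather than on one aggregate number, so you can see whether the lift is consistent or driven by a single market.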

A few things to consider are the timing, the product categories tested, the number of test areas, and the spend level. Try to avoid testing when there is a lot of other noise, such as seasonal promotions and holidays. The product categories can also have a big effect on the test: if you choose categories that typically have big revenue numbers, it is more difficult to see changes caused by your test, so it is usually better to pick categories with smaller sales numbers. And lastly, the number of test areas and the amount spent on the test go hand in hand. If you choose too many test areas, your testing dollars will be spread very thin and it will be difficult to make an impact; but if you choose too few, the results won't be statistically significant. So it is important to pick a middle ground that allows you to put a meaningful amount of testing dollars into each test area.
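One way to put a number on that "middle ground" is a standard two-sample power calculation: given the smallest lift you care to detect and how much sales naturally vary from area to area, it tells you roughly how many test areas each cell needs. The inputs below (a 5 percent target lift, 8 percent area-level variability) are assumptions for illustration, not recommendations:

```python
import math

# Rough sample-size sketch: how many matched test areas per cell are needed
# to detect a given sales lift? All inputs here are assumed values.
z_alpha = 1.96   # z-score for two-sided 95% confidence
z_beta = 0.84    # z-score for 80% power
sigma = 0.08     # assumed std dev of area-level sales change (8% of baseline)
delta = 0.05     # smallest lift worth detecting (5% of baseline)

# Classic two-sample formula: n = 2 * (z_alpha + z_beta)^2 * (sigma / delta)^2
n_per_cell = math.ceil(2 * ((z_alpha + z_beta) ** 2) * (sigma / delta) ** 2)
print(f"~{n_per_cell} test areas per cell to detect a {delta:.0%} lift")
```

The takeaway is the trade-off the paragraph describes: a noisier metric or a smaller target lift drives the required number of areas up fast, which is exactly why low-noise timing and smaller product categories make the test cheaper.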

Online product research has become an integral part of the consumer shopping experience, and your online presence surely contributes significantly to your ROI beyond what is sold online.

As Jeff Smith of Accenture Retail said, "Instead of replacing bricks and mortar stores, the Internet is an extension of consumers' in-store shopping experience providing a resource to research product and price. Retailers and manufacturers must understand this consumer behavior trend in order to reach shoppers, educate them, serve them and earn their loyalty."

Now you just have to test it for yourself.


Big Daddy Sexton said...

Absolutely agree! We use locally targeted AdWords to enhance our local print advertising, and track via coupons and customer feedback. One comment, though:

"You should try to avoid testing when there is a lot of other noise, such as seasonal promotions and holidays"

Unfortunately, all there seems to be is noise these days! Especially when you are trying to coordinate a national campaign with online promotions and local campaigns with local promotions. It takes a lot of resources just sorting out the data by itself, much less the data from the noise! Worth trying, though.