According to Seth Godin, now is the time to start a business. I was reading through the last several posts on his website and came across this one, and I would like to build upon his idea. Now is also the time for strong companies to leapfrog their competitors through strategic expansion and investment. In this market, there are opportunities to purchase undervalued assets; think JP Morgan Chase’s purchase of Bear Stearns. There are also opportunities to invest strategically. I heard Warren Buffett speak five or six years ago about a decision early in his career to purchase IBM shares at a time when he believed the stock was undervalued; he has since seen an incredible return on that investment. I am betting his investment in Goldman Sachs will also prove his reputation as a savvy investor: by purchasing perpetual preferred stock he receives a 10% dividend. And don’t forget the opportunities to invest internally. A few weeks ago I wrote about iKnowtion, which is investing in its staff and its future.
I was catching up on Tangyslice’s blog and enjoying his 5 meaningless marketing metrics post, when I thought of another meaningless metric. Last week I was reading a presentation which described the response rate of one group as slightly greater than the control group. What does slightly greater mean in this context? Well, it turns out the difference was statistically significant once I did the math. What I find meaningless is when analysts do not test for statistical significance when comparing two groups. Comparing two groups this way is known as A/B testing.
Conceptually, A/B testing is very simple. You are comparing Group A to Group B. A might be a control group and B the test group. Alternatively, A and B might be two different offers, landing pages, e-mails, or direct mail lists. As the name suggests, this is a test, which is why A/B testing is also known as split testing. Ultimately, you want to know if A and B differ in a way that is statistically significant.
Here’s an example to make it concrete. Let’s say that you marketed to 50,000 customers encouraging them to purchase product A and 5,000 of them responded. That is a 10% response rate. In addition, there were 5,000 customers that you could have marketed to but that you did not. Instead, you assigned them to the control group. They look and act just like the 50,000 customers that you mailed. The reason for the control group is that some customers might buy product A regardless of whether you market to them or not. In this example, 450 of them or 9% purchased the product. Is the difference between 10% and 9% statistically significant? Was the campaign successful?
In this case, we perform the two-proportion z-test (with pooled variance) using the following formula:

z = (p1 − p2) / sqrt( p × (1 − p) × (1/n1 + 1/n2) ),

where p = (x1 + x2) / (n1 + n2) is the pooled response rate, and
p1=10% (response rate for Group A)
p2=9% (response rate for Group B)
x1=5,000 (number of responders in Group A)
x2=450 (number of responders in Group B)
n1=50,000 (quantity mailed in Group A)
n2=5,000 (quantity mailed in Group B)
If the absolute value of z is greater than 1.96, the difference is significant at the 95% confidence level (two-tailed test). In this case, the z value is 2.26, so the difference is statistically significant.
In order for the test to be valid a few assumptions must be met:
1. Your control group needs to contain customers or prospects that look and behave like the treatment group.
2. You need sufficient numbers of direct mail recipients and responders such that n1p1 > 5, n1(1 − p1) > 5, n2p2 > 5, and n2(1 − p2) > 5, and n2 is at least 30.
3. The groups must contain independent observations.
The math might look scary, but the hard part is really making sure that the test is done properly. It is vital that the control group contains a random selection of customers who are similar to the treatment group. If not, you could end up with very strange results.
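For readers who would rather let a script do the arithmetic, here is a minimal sketch of the pooled two-proportion z-test described above, applied to the mailing example:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test.

    x1, x2 -- number of responders in each group
    n1, n2 -- number of recipients in each group
    """
    p1 = x1 / n1
    p2 = x2 / n2
    # Pooled response rate across both groups
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# The example above: 5,000 of 50,000 mailed responded,
# versus 450 of the 5,000 held out as a control
z = two_proportion_z(5000, 50000, 450, 5000)
print(round(z, 2))  # 2.26 -- significant at 95% confidence (|z| > 1.96)
```

Since 2.26 exceeds 1.96, the campaign lifted response by a statistically significant margin.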
Last week I attended a Marketing Analytics conference in Boston sponsored by iKnowtion. At a time when companies are cutting expenses, including staff and marketing budgets, iKnowtion is investing in their future. They are also engaging in a dialogue with the larger Marketing Analytics community through the conference and their blog. (In the interest of full disclosure, I know several people at iKnowtion but have never worked there.)
The conference began with a talk by Tom Davenport, author of Competing on Analytics (with Jeanne G. Harris). He set the stage by providing examples of how companies recognize the importance of analytics but reminded us that marketing is still a combination of art and science. As the emphasis shifts more towards the science of marketing, we need to recognize that the “art” is still relevant. He further challenged us to move beyond reporting to provide more value and insight.
Next was a panel on driving business value featuring speakers from GM, CVS Pharmacy, and ConstantContact. Each speaker provided a brief case study of how analytics has helped their business. In one case, analytics changed the focus of the business. In another, it led to the rebalancing of product marketing. Finally, the rigors of “test, measure, and learn” enabled one company to optimize media effectiveness across channels.
After lunch there was a lively digital panel discussion around social media, the future of web-enabled communities and the challenges of measuring the impact of companies’ efforts in this space. Given the evolving nature of social media, it is no surprise that there were divergent opinions. I, for one, appreciated the candor and the healthy discussion that ensued. To quote Jane Austen, “My idea of good company…is the company of clever, well-informed people, who have a great deal of conversation.”
The conference wrapped up with a return to the theme of competing on analytics. This free flowing discussion touched upon a range of topics, including how to become a company that uses analytics for competitive advantage. Interestingly, one of the panelists thought that finding good talent was the biggest challenge we face. As a Marketing Analytics professional who hires and develops staff, I am in complete agreement. There is stiff competition for the best analytic staff and I have found it difficult to find technical competence coupled with business acumen. In fact, the discussion about finding, training and retaining analytic staff continued at the bar, after the conference formally ended.
iKnowtion has plans to hold the conference again next year and I encourage you to attend.
As much as I like to measure and quantify, it is too early to try to assign a dollar value to social media. Marketers are still trying to figure out how to use social media effectively. Until they do, they will not be able to measure the impact of social media on their efforts and their brands more generally.
If you are interested, Limeduck has posted about a tweetup held by a Boston radio station, WBUR, to explore and discuss social media. Also, Tangyslice has begun interviewing “real people using Web 2.0 to improve the way they do business”.
I was asked recently how to prioritize new customers if you do not have demographic or firmographic data available. In other words, what can you do with only the data from the first purchase?
To make this more concrete, let’s consider the following situation. You are asked to call each and every new customer who has made a purchase. The question is, how do you prioritize the calls? You want to make the first calls to those with the greatest potential to become loyal and valuable customers. The only data available relates to the first purchase: total revenue generated, products purchased, product revenue, etc.
In this case, a linear regression could be used to help you identify the factors that predict lifetime value. (Other types of models can be used depending on the independent and dependent variables available.) Using your existing customer base, build a model that leverages data about the first purchase to predict lifetime spending. You can identify the best and worst new customers using the resulting model equation. Armed with this insight, you can test your model by calling on new customers with the best predicted lifetime revenue and a random selection of new customers regardless of predicted lifetime revenue. In addition, you can test call back timing to determine if there is an optimal call back window.
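As a minimal sketch of the approach, the toy script below fits an ordinary least squares regression of lifetime revenue on a single first-purchase feature and then ranks new customers by predicted lifetime value. All customer names and dollar figures are made up for illustration; a real model would use many more features and a statistics library.

```python
def fit_simple_regression(xs, ys):
    """Return (intercept, slope) minimizing squared error (ordinary least squares)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical history: first-purchase revenue vs. observed lifetime revenue
first_purchase = [50, 120, 80, 200, 30]
lifetime = [400, 1100, 700, 1900, 250]

a, b = fit_simple_regression(first_purchase, lifetime)

def predicted_lifetime_value(first_order_revenue):
    return a + b * first_order_revenue

# Hypothetical new customers, keyed by first-purchase revenue
new_customers = {"cust_1": 90, "cust_2": 40, "cust_3": 180}

# Call the highest predicted lifetime value first
call_order = sorted(new_customers,
                    key=lambda c: predicted_lifetime_value(new_customers[c]),
                    reverse=True)
print(call_order)
```

The resulting call order simply follows the model’s predictions, which is exactly the prioritization the post describes; holding out a random sample of new customers, as suggested above, lets you validate those predictions.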
Even with limited data, analysis can lead to insight. Further, there is always an opportunity to incorporate testing. In this case, testing can validate initial findings and help you learn more about the purchase cycle.
If you have been reading Limeduck like I have, you might have read about a Rockport print ad showing shoes that are not available for purchase. As I noted on that website, this has happened before. In a Marketing class several years ago, my professor showed a television ad featuring a car that could not be purchased. As you can imagine, consumers saw the car and went to their local dealerships looking for that car only to learn that it wasn’t available. Given that high profile mistake, I am surprised that Rockport made the same gaffe.
And yet, just today I received an e-mail that I wanted to share. In an earlier post, I talked about how some e-mail programs do not load images. This was in the context of measuring the open rate of an e-mail. However, the fact that some e-mail software turns images off by default also affects the look and feel of an e-mail. Here’s what the e-mail looked like:
In the image above, pictures have been replaced by boxes featuring red squares, blue triangles and green circles. All of the time spent crafting a beautifully designed e-mail is lost if recipients cannot quickly read about the offer(s) and easily engage with the e-mail. I certainly did not bother to display the images in this e-mail.
Do you spend a lot of time on YouTube? If so, you may have already seen this, but I was surprised to see the MC Hammer video on YouTube. Am I the only one surprised to hear the words “behavioral targeting” being spoken by MC Hammer? Who knew that MC Hammer and I would have something in common? We both believe that analytics enables you to allocate your marketing dollars effectively.
If you haven’t seen it, watch MC Hammer on Analytics from YouTube.
Clients ask me about e-mail open rates and, honestly, they are not what they used to be. In fact, they no longer matter for many reasons but here are my top three:
1. False negatives. An e-mail is considered opened when a tracking image is downloaded. However, major e-mail clients like Gmail disable images by default. If you read the e-mail with images disabled, it will never be counted as an open. And what about text e-mails? They do not include images and thus do not count as opens unless you click on a link (and even that might be e-mail software dependent).
2. False positives. Let’s assume that images are enabled. E-mails displayed in a preview pane are considered opened because the images were downloaded. But who always reads the e-mails in their preview pane? I don’t and I bet you don’t either. So you have undercounting due to the disablement of images and overcounting due to the use of preview panes.
3. What really matters is the action taken. To me, the true success of an e-mail marketing campaign is whether you drove the desired action. Did you sell more widgets as a result of the e-mail campaign? If not, the open rate is moot.
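The false negatives and false positives above can be illustrated with a toy tally. Every recipient and event below is hypothetical; the point is only that image-based “opens” need not line up with who actually read the e-mail, let alone who bought:

```python
# Hypothetical recipients: whether the tracking image loaded, whether the
# e-mail was actually read, and whether a purchase followed.
recipients = {
    "a@example.com": {"images_loaded": False, "read": True,  "purchased": True},   # read with images off: false negative
    "b@example.com": {"images_loaded": True,  "read": False, "purchased": False},  # preview pane: false positive
    "c@example.com": {"images_loaded": True,  "read": True,  "purchased": False},
    "d@example.com": {"images_loaded": False, "read": False, "purchased": False},
}

measured_opens = sum(r["images_loaded"] for r in recipients.values())
actual_reads = sum(r["read"] for r in recipients.values())
purchases = sum(r["purchased"] for r in recipients.values())

# Here the errors happen to cancel (2 measured opens, 2 actual reads),
# yet the "open" list and the "read" list are different people --
# and only the purchase count tells you whether the campaign worked.
print(measured_opens, actual_reads, purchases)
```

Note that even when the aggregate open count looks right, it can be counting the wrong people, which is why the action taken is the metric that matters.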
The United States Postal Service has announced its third quarter results for Fiscal Year 2008. It reported a 5.5% decrease in mail volume from the same period last year. Mail volume in the quarter was 48.5 billion pieces. First-Class Mail and Standard Mail also fell 5.5% from the third quarter of FY2007.