Data-driven direct response marketing has become increasingly popular in this digital age. But its value to marketers comes from a bias built on a mountain of assumptions and misperceptions.
Before we get bogged down by the incendiary nature of that last statement, let’s instead start addressing this subject by jumping to the end of this discussion.
Data tells you “what”, not “why”.
If you don’t know “why,” then you won’t learn “what” really works.
Okay. Back to the incendiary aspects of this topic. Here’s a typical example of this “what” instead of “why” problem as it normally occurs in a data-driven online ad campaign.
Consider an average display ad that runs in a programmatic ad network. Broad targeting is preferred for this ad because the programmatic system will optimize the audience based on response. In other words, the audience is refined based on who clicked on the ad, and the system seeks out specific people who look like those who clicked.
This is actually prejudicial in that it assumes that people who clicked on the ad are the right customers, and it dismisses the people who didn’t click as the wrong customers.
In actuality, the most accurate insight you can draw from that behavior is not whether they are the right or wrong customer, but simply that they are the type of person who is more likely to click on an ad.
Furthermore, decisions for this ad campaign are being made based on “what” people did (they clicked), but nothing is learned about “why” they clicked. The focus is on a tiny minority of the market — click-through rates for display ads amount to minuscule fractions of a percentage point of the targeted audience that was reached. As for what constitutes a good click-through rate, the averages are around 1.91% for search and 0.35% for display.
Think about this. All the effort that goes into campaigns of this nature is spent optimizing response rates from the smallest minority of the target audience. And no part of a data-driven marketing campaign is directed toward understanding and changing the behavior of the vast majority (98.09%–99.65%, on average) that did not click. Nothing is being learned about “why” people did not click.
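The arithmetic behind those percentages is simple but worth making explicit. A minimal sketch, using only the average CTR figures quoted above (the function name and figures are illustrative, not from any ad platform’s API):

```python
# Back-of-the-envelope math: given an average click-through rate (CTR),
# how much of the reached audience does a response-optimization loop ignore?

def non_responders(ctr_percent: float) -> float:
    """Share of the reached audience that did NOT click, as a percentage."""
    return 100.0 - ctr_percent

search_ctr = 1.91   # average search CTR (%), as quoted in this article
display_ctr = 0.35  # average display CTR (%), as quoted in this article

print(f"Search: {non_responders(search_ctr):.2f}% of the audience did not click")
print(f"Display: {non_responders(display_ctr):.2f}% of the audience did not click")
# → Search: 98.09%, Display: 99.65%
```

In other words, even the better-performing channel leaves roughly 98 out of every 100 people reached entirely outside the optimization loop.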
And then there is the question of “why” people did click. Data-driven marketing campaigns are indifferent to “why.” They continue to run and optimize until results begin to decline. Then new creative is tested against the current campaign until a challenger achieves better click-throughs. The winning creative runs until it, too, declines. And so it goes: a model that produces diminishing returns by default.
Why is “Why” so important?
When all of this data-driven activity takes place between the brand and a minority of its marketplace in this fashion, nothing is ever learned about “why” those customers clicked or didn’t click.
Why? Why did they respond? Was it because the brand was well known and trusted? Was it because the offer was attractive? Maybe the creative was likable and inspired their engagement. Perhaps the timing and context of the message triggered an urgent response. Or maybe colors and pictures provoked an emotional reaction.
As a marketer, you deserve to know the answers to these questions. Don’t you think knowing the answers before the ads run and before they are programmatically optimized will produce exponentially better results?
And what about those other customers? “Why” did they not click? This is the area that effective marketers explore before they launch their campaign. It can’t be understood by analyzing click through data.
Understanding the barriers that prevent a change in behavior is the first step to defining an effective marketing program. When evaluating the marketing challenge, we have to understand the context that surrounds the challenge. This is defined as much by these barriers to action and change as it might be defined simply by the tacit benefits the brand offers.
The MVP is NOT your most valued performer.
All of this raises the question: “why are data-driven marketers engaged in such narrowly focused activity?” The answer is the “minimum viable product” (MVP).
MVP is the product of our fast-paced, digital culture that values “going to market fast” over “going to market smart.” And the data-driven marketers are helping to support this behavior by suggesting that these minor movements in the progress needle are actual proof that rapidly getting their MVP into market is really working.
“Business leaders have mistakenly decided that marketing is an efficiency game. According to them, if you’re efficient, you harvest customers.” -Rory Sutherland
These marketers are settling for harvesting customers from the smallest fraction of their customer base because they’re looking for fast results and have bought into the misperception that direct response marketing is both efficient and effective.
The behavioral culprit in this situation is called “anchoring.” The industry average click-through rate of 0.35% creates an anchor in the minds of marketers and suggests that 0.35% is the benchmark for success. A small anchor will encourage acceptance of small improvements on that response rate.
Consider this: the first time marketers are told to expect a 0.35% response rate, they are typically shocked and unwilling to believe it could be true. You probably had the same reaction the first time you were exposed to this data. But the anchor is set, and when they choose to move ahead into the realm of data-driven direct marketing, they willingly accept it and forever measure results against that shallow starting point.
What’s the true value of this data?
Just consider the very definition of data-driven direct response marketing.
Direct response marketing is a type of marketing strategy where the goal is to encourage an immediate response from consumers in order to quickly generate new leads. The response can be any action such as visiting a website, responding to a direct mail offer, making a purchase online or on the phone, or even just sharing a post on social media.
And now consider how low the response rates are in these channels compared to the overall audience being reached and the total size of the marketplace. Finally, consider how misguided marketing decisions are when they’re based on the minority that responds instead of addressing the barriers and interests of the much larger majority.
A data bias has created the problem
Data-driven direct response marketing has a step-sibling that is also gaining popularity on the strength of misguided assumptions and misperceptions. This is due in part to a similar bias toward data and the assumption that data represents truth.
The step-sibling is called database marketing.
Database marketing is a form of direct marketing using databases of customers or potential customers to generate personalized communications in order to promote a product or service for marketing purposes. The method of communication can be any addressable medium, as in direct marketing.
The biggest flaw in this approach is that it relies on a customer database that represents the smallest minority of the marketplace (those that have opted in). And the prospecting database is created by finding lookalikes from that same, tiny opted-in customer database.
So the database marketer puts all their effort into improving response rates from existing customers, addressing the smallest minority of their marketplace by name.
Here is the major chink in data’s armor.
Data science only applies one type of insight to solving marketing problems.
“It’s important to remember that big data all comes from the same place – the past. A new campaigning style, a single rogue variable or a ‘black swan’ event can throw the most perfectly calibrated model into chaos.” -Rory Sutherland
To be an effective marketer, you have to draw insights from as many sources as possible.
“Nothing is so powerful as an insight into human nature… what compulsions drive a person, what instincts dominate their action… if you know these things about a person you can touch them at the core of their being.” -Bill Bernbach
It is in the marketer’s best interest to mine insights from as many sources as possible (quantitative, qualitative, apocryphal, and external) in order to fully understand the context of the problem.
“That’s because, in general, better intervention design happens when you have as many potential insights as possible at the beginning of the process—a big, wide funnel of opportunities for behavior change that slowly gets narrower as we hone in on pressures we’re able to successfully design interventions around. The more insights we have to start with and the faster and more thoroughly we can validate them, the more interventions we can design.” -Matt Wallaert
Behavioral marketing versus data-driven marketing.
A behavioral marketer strives first to identify outcomes, understand the context of the problem, and identify barriers to action before exploring creative solutions to effect a change in those behaviors.
By starting with outcomes instead of processes, the most effective marketers understand what people want to do and why they aren’t already doing it, then build campaigns and programs to bridge the gap.
And before they finish, they test their theories for effecting change until they arrive at a reliable solution to take to market. Even then, they continue to test.
“If we are not continually testing, then we are not continually improving.” -David Ogilvy
These two approaches, data-driven marketing vs behavioral marketing, offer two choices:
You can optimize down with data-driven marketing, turning a small number of responses into an even smaller number of optimized responses.
Or you can optimize up by learning about customers’ barriers to response, creating interventions to overcome those barriers, and then testing your way toward larger and larger volumes of response.
At Positive Brand, we are big believers in the power of data. We take a scientific approach to marketing that draws on data from multiple sources: response data from online and offline channels, but also qualitative research, ethnographic studies, interviews, focus groups, online research, and academic literature.