Everywhere we turn, we're being bombarded by propaganda from the major advertising platforms: "You need to move beyond Last Click attribution!" Google is always pushing the Data-Driven attribution model, and that certainly sounds good. But what's the truth here? What's going on behind the curtain, and should you make the jump to a different attribution model?
We're always using some attribution model. Most people just don't realize it for a while. Eventually they hear about the term "Last Click" attribution and realize that's just one of many possible ways to allocate credit for a conversion. But then as you dig into the other options available, and the "best practices" that are being promoted, it can quickly get confusing.
Does It Even Matter?
Every attribution model is just a method of allocating credit for conversions and conversion value across the various touchpoints that are tracked. The first question to ask yourself is: why do I care how conversion value is allocated to those touchpoints?
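To make the "just allocating credit" point concrete, here's a minimal sketch of two common models. This is purely illustrative: the function names and the toy touchpoint path are ours, not any ad platform's API, and real platforms apply far more complicated rules.

```python
# Illustrative only: splitting one conversion's value across tracked
# touchpoints under two simple attribution models.

def last_click(touchpoints, value):
    # All credit goes to the final touchpoint before the sale.
    return {tp: (value if i == len(touchpoints) - 1 else 0.0)
            for i, tp in enumerate(touchpoints)}

def linear(touchpoints, value):
    # Credit is split evenly across every tracked touchpoint.
    share = value / len(touchpoints)
    return {tp: share for tp in touchpoints}

# A hypothetical path to a $100 sale.
path = ["display_ad", "organic_click", "search_ad"]
print(last_click(path, 100.0))  # only "search_ad" gets the $100
print(linear(path, 100.0))      # each touchpoint gets 100/3, about $33.33
```

Notice that the sale itself is identical in both cases. The only thing that changes is which row of the report the $100 lands in.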
The thing you need to remember is that an attribution model doesn't change what your sales are. It just changes your reports, shifting credit for the same sales from one bucket to another. And the bottom line about all reports is this: if you don't use the information to make different decisions than you otherwise would have, it really doesn't matter. If a report doesn't translate into action, it's just moving numbers from one column to another.
Perhaps You Should Just Stick with Last Click
If you're not using the differences in conversion tracking to drive different actions, you should probably just stick with Last Click attribution. Why? Because it makes sense. It's the de facto standard that everyone has used historically, and it mirrors how marketers credited sales before the internet. Basically, it gives you a really good idea of what finally tipped someone from being a non-customer over into being a customer. And that, frankly, is extremely valuable information. If you never get more sophisticated than that, you can run a successful business and make great marketing decisions for decades to come, just understanding that.
And hey, if you need to dig deeper into the various touchpoints that occurred before the sale, those are all still tracked. You can easily find them in Google Ads, Facebook, Analytics, etc. The information on conversion paths and funnels is all in there. Is there really a reason to go beyond those reports and start shifting the credit for the final sale around?
Keep it simple, and you will always understand what your reports are actually telling you. And you'll always understand the biases in the data.
Or Maybe It Does Matter?
The promise of the multi-touch attribution models is that, by spreading out the conversion value across various touchpoints, the data about the value of those touchpoints can be integrated into decisions about where to spend money.
On its face, this actually sounds really good. If someone saw a display ad three weeks ago, that ad might have moved them closer to purchasing from us, and giving it some credit may help us make better decisions about allocating ad spend across a complex account. Sure, we could still get a report in the ad platform about this, but perhaps it's nice to have a share of that sale automatically allocated to that ad, so it's easier to run the numbers on balancing the spend vs. the results.
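Here's what "running the numbers" looks like in practice. The figures below are entirely hypothetical, but they show why the choice of model matters: the same $200 sale, credited two different ways, tells two very different stories about the display channel's return on ad spend (ROAS).

```python
# Hypothetical spend and credit figures: the same $200 sale, allocated
# under two different attribution models, changes per-channel ROAS.

spend = {"display": 50.0, "search": 50.0}

credit_last_click = {"display": 0.0, "search": 200.0}
credit_multi_touch = {"display": 60.0, "search": 140.0}

def roas(credit, spend):
    # Return on ad spend: credited conversion value divided by spend.
    return {channel: credit[channel] / spend[channel] for channel in spend}

print(roas(credit_last_click, spend))   # display looks worthless (0.0)
print(roas(credit_multi_touch, spend))  # display now looks profitable (1.2)
```

Under last click, you'd cut the display budget; under the multi-touch split, you'd keep it. Same sale, opposite decisions, and nothing in the numbers tells you which allocation reflects reality.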
Theoretically, if we could accurately allocate conversion values to prior touchpoints based on how they proportionately supported later sales, this would be gold. If we could do that with accuracy, it would dramatically improve the efficiency of advertising, while also helping us reduce all the spam that people don't want to see.
But for now, there is ZERO evidence that we are capable of this.
Correlation Does Not Imply Causation
But what's wrong with this? First off, it's not at all obvious how much credit to give those prior touchpoints. This is why there is a proliferation of attribution models—nobody can agree on what to use. And as various ecommerce merchants make arbitrary decisions about which model to use, it's not obvious they are actually driving higher ad performance. Beyond the risk of allocating the wrong amount of credit to previous touchpoints, there's a deeper problem: it's not obvious that those touchpoints had any causal relationship to the final sale at all.
Even when using a "Data-Driven" model that purports to allocate credit based on what is statistically shown to drive future sales, this is actually getting into some pretty shaky territory. The math being used is capable of showing which touchpoints were more correlated with future sales, but to say that these touchpoints caused future sales is an abuse of statistics. Remember that old adage that correlation is not causation? Attribution models are becoming an egregious example of people forgetting this point.
What we are doing is replacing one flawed model (Last Click) with other flawed models.
Garbage In, Garbage Out
Machine learning is going to save us, right? Nope!
Machine learning is being used to train automated bidding strategies that can optimize for more conversions, more clicks, more conversion value, a given ROAS, etc. Unfortunately, every one of the existing "smart bidding" algorithms has some fairly serious flaws for ecommerce merchants who want to optimize for increased profitability. None of these systems can deliver that currently.
The promise here is that if we can feed more accurate data about which touchpoints drive conversions into these systems, they can crunch the numbers and deliver better results. But when we make arbitrary decisions about how to skew the attribution data, and then feed that data into systems optimizing for non-profit objectives, we push the output of these systems even further from reality. It doesn't matter how well a system can optimize for clicks if we're feeding it garbage data and what we really want to optimize for is profit.
These factors are creating a reality where many ecommerce merchants are putting their faith in deeply flawed systems built on unstable foundations. The output of these systems is often far worse than what can be achieved by people who know what they are doing.
For an in-depth explanation about the biases of automated bidding algorithms and how they drive unprofitable performance, check out All The Ways Smart Bidding Is Dumb.
We hope these systems will improve in the future, but right now, they are not up to the challenge of modern ecommerce.
What About Unmeasurable Touchpoints?
One of the major factors that makes multi-touch attribution models inaccurate is that they can only spread conversion data across touchpoints that they can measure. Consider a more complex business with salespeople, attendance at tradeshows, direct mail, television spots, and so on. No attribution model is capable of capturing anything about these touchpoints. Instead, the model only considers touchpoints like ads viewed, ads clicked on, organic clicks, etc. If it didn't happen on the internet, it didn't happen.
And these systems can't even measure everything that happened on the internet. They don't measure that email exchange with your customer service agent before the sale. Or checking out reviews of your company on that third-party review site.
And here's the dirty little secret about the attribution models available within the ad platforms: they only know about the touchpoints within that particular ad platform. When you select a Data-Driven attribution model in Google Ads, it only knows about ad touchpoints. It doesn't even take organic clicks into account.
When the conversion value is spread across a number of ad touchpoints, you need to understand that it's likely overestimating the true economic value of every touchpoint.
This illustrates one of the major problems with all this data available to us. It lures us into thinking that everything is measurable, everything is precise, and that we actually know what is going on. That couldn't be further from the truth. Calculating some metric out to 10 decimal places doesn't improve the accuracy of our knowledge when we're calculating it with flawed data based on flawed assumptions.
It's good to stay humble and realize we often know far less than we think we do.
What Should We Do?
I think we should all take a breath, accept that changing attribution models does not improve the accuracy of our knowledge or decisions, and not worry about it all that much.
Last Click attribution has flaws. But at least we understand the biases in that method, and we can take those into account when making our decisions. If you're using another attribution method, that's fine. Just make sure you understand the biases that method is going to have.
Just don't think that you know more than you do, and be careful about turning your thinking over to an automated system.
Anyone trying to push attribution model changes on you has their own motives for doing so, and it's worth pausing to consider how they stand to benefit from the change they're promoting. Perhaps they're trying to get you to value more upper-funnel ads so you expand into their display network, for example.
This is an area that's rapidly evolving. In a few more years, the tools available to us will have improved even more. But for now, you're very unlikely to see any tangible benefit from one attribution model vs. another. They're all bad. They all have biases. They all drive bad decision making. It's the Wild West. And until things actually improve quite a bit more, you might just be better off sticking to what you can actually measure and understand. There’s nothing wrong with Last Click attribution.
Psyberware specializes in managing online advertising for ecommerce businesses, and nothing else. If you want to build a great relationship with a group of dedicated people who really understand ecommerce, get in touch with us.