Martech spending is estimated to hit $122 billion by 2022. Marketing technology now accounts for more than a quarter of marketing budgets. In a year when Salesforce acquired the data visualization tool Tableau for nearly $16 billion, it’s impossible to ignore how much money is being spent on technologies that are supposed to help organizations make better marketing spend decisions.
It made sense to start with data visualization in 2004, when digital marketing solutions were brand new and no one knew which data would be important. And yet today we face a very real problem: data visualization continues to receive more attention than the quality of the data being visualized. This misplaced focus isn’t just cosmetic; it significantly limits the resulting insights for marketing attribution.
Ask almost any marketing executive to list her top three priorities for the coming year. It will be a rare leader who does not mention “getting reliable marketing attribution.” Marketers have known for years that once they capture this elusive attribution information, they’ll be significantly more effective in allocating their limited resources. Every year organizations pour hundreds of millions of dollars into trying to get an accurate view of marketing attribution, in part because the potential ROI is enormous. The goal has been to answer the simple question, “Which of my marketing efforts have the most impact on my marketing goals?” To date, there have only been partial successes.
Attribution and touchpoint data are imported into visualization tools only after they have been collected in each marketing tool’s proprietary structure. Only then is there any attempt to rationalize the data from disparate systems and make it consistent. This isn’t just slow, sometimes taking months of effort; even after reconciliation and cleansing are complete, significant holes and gaps remain in the data. As the adage goes, “garbage in, garbage out.” It doesn’t matter how slick your visualizations are. Even the most beautiful plant will die without the right conditions to sustain it.
So, let’s back up and define what it is we’re missing as we gather data. There are four necessary pillars to gather the right data the right way.
How to Ensure You Have the Right Data
There needs to be compatibility with the systems we’re currently running.
Yes, all of them. It has proven insufficient (might I add even unfair and unrealistic) to say, “just use best practices to implement a governance approach within your organization that will keep your data consistent.” No one knows what that really means…and yet everyone goes along with it anyway. This inconsistency leads to significant gaps, and yes, incompatibilities within your systems. Instead, there needs to be a way to receive consistent and complete data from all systems of record: web & mobile; online & offline; paid, owned, and earned; marketing, sales, service/support, and product experience. This compatibility must be a fundamental capability automatically provided by an underlying solution like ObservePoint.
There must be a concept of expandability.
An organization cannot be limited to the five attributes that can be captured using UTM parameters, and yet that’s the reality we live in more than two decades after their introduction. Spreadsheets can be useful, if somewhat brittle and vulnerable to human error, as a workaround for capturing more attributes, but eventually every team outgrows such a manual, error-prone process. What we need instead is an approach that is straightforward, simple, and unlimited: the ability to capture dozens of attributes, particularly for the content we’re using, so we can understand not simply which channels are performing, but why they’re performing. This complexity needs to be managed by underlying technology. We need to free some of our best employees from tedious data collection and wrangling and use them for analysis that will impact the bottom line.
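To make the ceiling concrete, here is a minimal Python sketch of what URL-based tagging actually captures. The five UTM parameter names are the standard ones; the example URL and values are hypothetical.

```python
from urllib.parse import urlparse, parse_qs

# The five standard UTM parameters -- the full extent of what
# URL-based campaign tagging captures out of the box.
UTM_KEYS = ("utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content")

def extract_utm(url: str) -> dict:
    """Pull whatever UTM attributes are present from a landing-page URL."""
    query = parse_qs(urlparse(url).query)
    return {k: query[k][0] for k in UTM_KEYS if k in query}

url = ("https://example.com/landing"
       "?utm_source=newsletter&utm_medium=email&utm_campaign=spring_launch")
print(extract_utm(url))
# Only three of the five slots were even used here -- and attributes like
# content topic, persona, or funnel stage have no standard slot at all.
```

Anything beyond those five slots has to live somewhere else, which is exactly where the spreadsheet workarounds, and their errors, creep in.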
There must be identifiability.
Despite all the progress made in determining the moves a buyer makes as they progress on the journey to purchase, there are still enormous gaps. Most technologies simply don’t gather enough of the right data and interactions to bring it all together. In order to truly know what’s working and what isn’t, we need to see all the touchpoints an individual (or in the case of B2B, an organization) has throughout the entire sales cycle. This includes the ability to connect all touchpoints that were made while an individual was still anonymous to your systems once they ultimately identify themselves in the sales process.
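The retroactive connection described above is often called identity stitching. Here is a hedged sketch of the idea, using a hypothetical event log where a visitor browses anonymously under a cookie ID and later identifies themselves; all IDs, event types, and field names are illustrative, not any vendor's actual schema.

```python
from collections import defaultdict

# Hypothetical event log: anonymous touchpoints keyed by a cookie ID,
# followed by an "identify" event once the visitor fills out a form.
events = [
    {"anon_id": "cookie-123", "type": "page_view", "page": "/pricing"},
    {"anon_id": "cookie-123", "type": "ad_click", "campaign": "spring_launch"},
    {"anon_id": "cookie-123", "type": "identify", "email": "jane@example.com"},
    {"anon_id": "cookie-123", "type": "demo_request", "page": "/demo"},
]

def stitch(events):
    """Retroactively attach pre-identification touchpoints to the known person."""
    by_anon = defaultdict(list)
    identity = {}  # anon_id -> email, learned from identify events
    for e in events:
        if e["type"] == "identify":
            identity[e["anon_id"]] = e["email"]
        by_anon[e["anon_id"]].append(e)
    journeys = defaultdict(list)
    for anon_id, touches in by_anon.items():
        person = identity.get(anon_id, anon_id)  # fall back to the anonymous ID
        journeys[person].extend(touches)
    return dict(journeys)

journeys = stitch(events)
print(len(journeys["jane@example.com"]))  # all 4 touchpoints, including the anonymous ones
```

The point of the sketch is the rekeying step: once the identify event arrives, every earlier anonymous touchpoint is credited to the person, not lost to the gap most tools leave.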
The solution must include evolvability.
Many technologies now integrate with each other, and if they don’t, they can’t survive. For data to be usable and intuitive, it must allow you to start from where you are to get initial—and trustworthy—answers right away. You should be able to view your attribution results in whatever reporting tool(s) your organization is currently using and then be able to move to more sophisticated approaches as they become available. The same is true of the attribution models you use. Trying to adopt the most sophisticated attribution models before your organization is ready to digest them can be counterproductive. You must be able to move from models that are descriptive to diagnostic to predictive and, eventually, to prescriptive according to organizational readiness.
Ultimately, all four of the above principles are driving toward the same thing: Unified Data. This is the end goal that companies like Salesforce and Google have been trying to achieve as they’ve spent billions of dollars acquiring data visualization platforms. But just because data sets can be presented side-by-side on a pretty dashboard does not mean they are actually unified. This is merely what we call “colocation.” To ensure truly unified data, organizations need complete, consistent data that is defined and standardized from inception, not merely cleansed, normalized, and munged together after collection. Pulling out a power sander doesn’t mean a square peg should ever be forced into a round hole.
Today’s Solutions Don’t Solve the Problem
Many excellent tools are now on the market that help with this attribution problem. Google Analytics and Adobe Analytics can be used to display online data. But what about offline data? Marketing automation tools such as Pardot, Marketo, and Eloqua capture data on suspects and prospects, but it’s hard to connect that data to what is happening during the formal sales process. Salesforce does an admirable job of capturing data once customers and prospects identify themselves, but not before. CDPs are taking the next step beyond CRMs, but they are still focused on gathering data only after a prospect or customer has been identified.
Unfortunately, all of the solutions above are still rationalizing and wrangling incomplete data. This means the billions of dollars spent every year can only provide incomplete insights.
As long as we continue with this approach, we’ll never achieve complete success in marketing attribution. But there is a way to do it—a way that will give trustworthy and reliable insights in real time, every time.
The solution lies in automating a process that enforces a consistent data structure before any data is actually collected, and then automatically applying that structure across every marketing, sales, and support tool in your stack. Once this structure is in place, the possibilities are limitless, and the results can amount to millions of dollars of financial impact.
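The "enforce before collection" idea can be sketched in a few lines: define one schema up front, then reject or flag any event that doesn't conform before any tool stores it in its own format. The schema fields and sample events below are hypothetical, purely to illustrate validation at the point of collection.

```python
# Hypothetical campaign-event schema, defined once and enforced at
# collection time -- before any downstream tool stores the data.
SCHEMA = {
    "campaign_id": str,
    "channel": str,
    "content_type": str,
    "audience": str,
    "spend_usd": float,
}

def validate(event: dict) -> list:
    """Return a list of violations; an empty list means the event conforms."""
    errors = []
    for field, expected in SCHEMA.items():
        if field not in event:
            errors.append(f"missing field: {field}")
        elif not isinstance(event[field], expected):
            errors.append(f"{field}: expected {expected.__name__}, "
                          f"got {type(event[field]).__name__}")
    return errors

good = {"campaign_id": "C-001", "channel": "email",
        "content_type": "webinar", "audience": "smb", "spend_usd": 1200.0}
bad = {"campaign_id": "C-002", "channel": "paid_search"}  # missing three fields

print(validate(good))  # []
print(validate(bad))   # three "missing field" errors
```

Because the check runs before collection, nonconforming data never enters the stack, so there is nothing to reconcile or cleanse months later.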
To see how Strala by ObservePoint can save you time and do it for you, schedule a demo.