Preparing Your Data and Your Organization for Attribution

July 16, 2020

The success of your program depends on the quality of your data. Campaign data is messy, and some concessions will need to be made in your data collection and analytics, but those concessions can't extend to data integrity. This session will discuss strategies to ensure that you have quality data you trust, without losing your sanity.



David Kirschner (00:07):
Hello, and thank you for joining the Marketing Attribution Symposium, and thank you to our sponsors at ObservePoint for putting this valuable seminar on. My name is David Kirschner. It is a pleasure to be back together with some of my peers. I'm a former Omniture and Adobe guy, and it's wonderful to be reunited with some of those folks. Today I'll be speaking about how to get your data and your organization ready to be successful in attribution, because as most of you have probably already realized, it's not a flip-the-switch kind of program. So to talk about my favorite topic—me—a little bit more, let's review, shall we? We always talk about customer journeys; I like to say I'm a customer journey man. I've been doing web analytics since the nineties, last millennium, but the relevant part of it begins in 2000. And all these logos, I realize, are no longer the logos.

David Kirschner (01:05):
These are the then-appropriate logos for these companies. I started as a CRM director back in Miami, Florida for hotels.com, which was then called 1-800-96-hotels. There, I got to run most of the online business, everything from advertising to analytics to affiliates. It was a really exciting time. From there I met the folks from Omniture and started, in 2005, working remotely as an independent principal consultant. Eventually I wound up leading the Americas consulting group, carrying on with Adobe, and really having some of the same customer journey challenges in advising customers that we saw at hotels.com. So while the technology had improved greatly, we still really weren't able to deliver the types of experiences customers wanted. Enter Google. In 2013, Google tapped me to essentially replicate what I'd done at Omniture, building a professional services group, but this one was going to be focused solely on attribution.

David Kirschner (02:07):
Recognizing that attribution was 10 times more challenging than any other analytics problem they'd solved, they made an acquisition in this space, a company called Adometry. So my work at Google largely consisted of onboarding this new company and helping position their services. And I've worked in both a presales and post-sales role with a lot of the Fortune 25 companies, what they call at Google the tip of the spear. There I really learned the importance of change management and how change management is crucial to a good attribution program. I'll talk about that a little bit more later. More recently, I joined Zoosk as their VP of analytics and data. Zoosk broke my heart because they were acquired shortly after I started. And I've been doing independent work ever since, consulting on attribution, marketing analytics, et cetera. So that's me in a nutshell.

David Kirschner (03:01):
So what have I learned? Quite a bit, but let's start with data. As you've already heard in the campaign piece, data's a messy business. A lot of people would come to me with data, when I was at Google, and say, is this good enough? Can we do attribution with this? And generally the answer is no. Good enough is not good enough for attribution. Think about it this way: the quality, veracity, depth, and breadth of your data is the limiter. That's the string that's limiting the kite of your program. I really love Boar's Head meats' slogan, "Sacrifice Elsewhere." That absolutely pertains to data quality and hygiene. And you also have to recognize there are going to be blind spots in your data: things like an outage where a system went down, and cost data, which is always incorrect.

David Kirschner (03:56):
Is it the book rate? Is it the discounted rate? Was it a make-good? I've seen some very interesting naming conventions that sometimes make things difficult, where an organization thinks they're not tracking a certain metric and they are, it's just called something ridiculous. So while dirty data is a problem, it is not a showstopper, and it's fair to say that no company with whom I worked had perfect data. Even the ones that had really excellent handles on their data and had a governance program in place still struggled with things like cost data and third-party sources of data. Another thing that I saw all the time is system bloat. You've probably seen this quote almost as much as you've seen the misattributed John Wanamaker quote. This one is also misattributed. As a San Diegan, I have to correct the record: this was in the San Diego Union a century ago. And it's true that the man with one watch knows what time it is.

David Kirschner (04:54):
A man with two watches is never quite sure. And that pertains to keeping a system of record. You need one source of truth for each metric, and before you embark on your program, you really need to identify what those are. When you're identifying what those systems are, there are also a few considerations to keep in mind. It doesn't have to be the same system for all metrics. Ideally, the systems that are your source of truth for each will port into a repository, whether it's a data lake or a database or whatever it is that your people are querying. But they don't all need to come from the same source. They do, however, need to meet the following criteria. They need to be reliable: you need to have confidence that each one is recording the event correctly, every time. Availability ties into that.

David Kirschner (05:49):
But in the event that you're using third-party technology to track some of your events, you need to consider availability, because those technologies don't always work. There's also the total cost of ownership, in terms of build or buy, and privacy, which is becoming more and more important, especially in the attribution world, as more and more legislation is introduced globally around consumer privacy. So once you have all of these metrics, where do they go? They go into the ether and everybody forgets them, right? No, they go into a data dictionary. It's really important to create a data dictionary. It's probably one of the least sexy things I've ever done in my business career, but the most valuable. So what is a data dictionary? It doesn't look like that book on the left. Usually it's an online wiki; it can even be an Excel spreadsheet. But what it needs to capture is what each metric is.

David Kirschner (06:44):
What's the definition of that metric? Why is that metric important? Where's that metric stored? What's it used for? Who has access to it? Et cetera. You can't over-document your data dictionary. In times of turnover, in times of staff being out or working remotely, as many of us are, it's really important to have that record of what things are, what they do, and why they've changed. I always like to also keep a record of any capture anomalies. There are outages, there are dark periods, and a lot of advertisers will go dark on a specific channel or publisher. All of that needs to be captured so that when you are looking at the data months or years from now, you have that context. That's so important to consider—the context is key, especially in attribution. And you need to document all these processes.
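
To make that concrete, here is a minimal sketch of what a single data dictionary entry might capture, whether it lives in a wiki page, a spreadsheet row, or version control. Every field name and value below is hypothetical, for illustration only.

```python
# A minimal, hypothetical data dictionary entry; the metric, table, and
# anomaly details are illustrative, not from any specific tool or company.
entry = {
    "metric": "paid_search_cost",
    "definition": "Daily spend per campaign, in USD, before agency discounts",
    "why_it_matters": "Feeds the attribution model and budget reallocation",
    "system_of_record": "Ad platform billing export",
    "stored_in": "marketing_dw.fact_spend",  # hypothetical warehouse table
    "access": "Marketing analytics (read/write); finance (read-only)",
    # Capture anomalies so future readers have the context described above.
    "anomalies": [
        {"dates": "2020-03-02 to 2020-03-04",
         "note": "Tracking outage; spend backfilled from invoices"},
    ],
}
```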

David Kirschner (07:42):
The processes I'm talking about here are the processes of data collection, data federation, and data syndication: getting data to the systems and people who can act upon it. When you're documenting that, think about it from the point of view of a user persona: as a user of your product or site or app, how will I paginate through it? What will my visitation process look like? Et cetera. If you put it in the user's point of view rather than the company's point of view, it just makes it much easier to drill down on potential problems. Another thing I have learned is to simplify. Take a deep breath. We saw this back when I was at Omniture: people were excited about all of the information that they could now collect, and started collecting all that information only to find that, frankly, a lot of it isn't valuable.

David Kirschner (08:37):
A lot of it isn't going to help you as an attribution program to reallocate marketing expenses to more efficient channels. So it's very difficult to know what you need to affect desired outcomes, because it's the classic case of you don't know what you don't know when you start an attribution program. If you knew that, you wouldn't need an attribution program. But you have to think about what inputs are going to be the most telling—closing the blind spots you have now. Things like device hopping, channel switching, whatever ITP loveliness Apple is putting into the marketplace this week. All of those are things you need to consider. Apple—I will just double-click on that for a second, because it is increasingly getting dark out there.

David Kirschner (09:29):
They are very privacy focused, to their credit. But it does make it difficult for marketers who are used to using deterministic, exact data rather than probabilistic data, which is extrapolated. In many cases we are having to use probabilistic or extrapolated data to capture the activity of iOS users, which creates more challenges for an attribution program. The other thing is maintaining some type of a query library. I've seen many programs where the analysts are banging away at the data source with queries, many of which have been run before but haven't been documented or stored. Keep the query library in the same kind of condition that your data dictionary is in. If your queries don't run quickly, they're not going to be useful. So you also have to consider: what is the frequency that you want for your data and attribution model refreshes?
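
On the query library point, here is a minimal sketch of what "documented and stored" might look like in practice, assuming a shared Python module that analysts import instead of rewriting SQL from memory. The table and column names are hypothetical.

```python
# A minimal, hypothetical query library: each vetted query gets a name, an
# owner, and a description, so analysts reuse it rather than re-deriving it.
QUERY_LIBRARY = {
    "weekly_channel_touchpoints": {
        "owner": "analytics-team",
        "description": "Distinct visitors touched, per channel per week.",
        "sql": """
            SELECT channel,
                   DATE_TRUNC('week', touch_ts) AS week,
                   COUNT(DISTINCT visitor_id)   AS visitors
            FROM marketing_dw.touchpoints      -- hypothetical table
            GROUP BY 1, 2
        """,
    },
}

def get_query(name: str) -> str:
    """Return the vetted SQL for a named query; raises KeyError if undocumented."""
    return QUERY_LIBRARY[name]["sql"]
```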

David Kirschner (10:31):
These can be time consuming and expensive, so you really do have to plan ahead, and the more frequently you update, the greater the chance for errors to occur. If you don't have the ability to make changes to your programs in flight, midday, you probably don't need a multiple-times-a-day refresh. That's probably overkill. Another thing you have to consider here is whether you want to use a flat-fee or a subscription-based tool. If you're using an external query source, usage-based tools are becoming more and more popular, and they're something I would recommend for most clients who are starting out and don't have a huge infrastructure internally. It's also important to understand the physical journey that your data takes: where your data goes through the pipelines, under the sea, bounces off satellites, et cetera.

David Kirschner (11:26):
This is important for a few reasons. One, we're in an era of new global privacy regulations that can impact a company both financially and reputationally. You have to future-proof your program to the degree possible, thinking about the shifts that are going to take place in channel usage and surveying your customers on their preferred communications channels. But ultimately the data journey will be informed by your program results. The goals that you reach and attain will help you simplify, consider the data that's needed, and perhaps, in future iterations, even cut down some of the data you're collecting to focus on the most important, actionable pieces. A lot of these things sound easy. They are not easy. It's like one small step for man, one giant leap for your attribution program. But these are some tips that I have found work time and time again.

David Kirschner (12:28):
You need to create some type of a steering committee, and I know that's something that ObservePoint recommends as well. A committee focused on attribution is a little bit unique in that attribution really does touch every part of the organization, from the customer service folks to the marketers, obviously, but also finance. Finance has a vested interest in seeing your attribution program succeed because it means more money for them. I always ensure that there's cross-functional representation: the leaders of each group, certainly the leaders of each cost center, but also the CRO, CFO, whoever the head financial officer is. If they are available, they absolutely should be locked in, for two reasons. One, you can prove value to them when this program comes up for renewal; they understand what it does and the impact it had on the bottom line. Two, CFOs don't change as often as, say, CMOs do. Some companies change CMOs like they change their underwear.

David Kirschner (13:36):
CFOs usually have a longer tenure and are often a bedrock of the institution, especially at the larger organizations I've worked with. This group needs to have some type of charter, some type of goals, and regular meetings where you get together and talk about what's happened in your area of the business, what's new from a data perspective, privacy, and any learnings or mistakes that you've made. I found that this was actually the most valuable part. Most of these teams don't have something like this. Most companies don't have something like this. And so through the work we were doing on attribution, groups were getting together that hadn't previously discussed this sort of thing, and it led to all kinds of fascinating discussions and collaboration. The other thing that you need to do is establish a really rigorous QA process.

David Kirschner (14:26):
Going back to that first point about the data being key, the most important thing, the string that limits your kite: you need to build a scalable, repeatable process to get all of your data in, stored, and available. What we did at Google, I can't go into the details, but suffice it to say it was a major undertaking. We didn't have just a couple of people working on it; we actually stood up a team. Some of it was onshore, some of it was offshore, which actually worked very well in that the onshore team could do some work and task the offshore team to work while they were sleeping, so we had almost a 24-hour process in place. It's not something that a lot of organizations have the bandwidth or the manpower to set up, but there needs to be something in place that is repeatable, scalable, documented, and extendable.

David Kirschner (15:23):
And what I mean by that is none of us work solely within our own four walls. We all have partners that we need to train on our systems. And here's where the rigor becomes extra important. When you're working with third-party data sources, demand the same rigor from them as you have in your own systems. For instance, a lot of data sources (I won't mention names) use the same sources of data. There are only so many cookies in the global cookie pool; there are only so many IDs out there. A lot of the time they have very similar data. So the question is: how often is it refreshed? How often is it scrubbed? What procedures are they using on their end to confirm that the data is correct, actionable, et cetera? Demand that same rigor, and train everybody on your process once you have it in place.

David Kirschner (16:14):
Then the experimentation: this is the fun part. We recognize that all models have bias. We like to say that the best attribution model is the one that's the least wrong, and that's true. The only way you can really determine which one is the least wrong is to have the right kind of culture in place, and that is a testing culture. Again, I know that's something that is espoused by the folks at ObservePoint, and certainly by my former colleagues: having a culture where failure is okay, as long as you only fail the same way once; where wins are celebrated; where there's consistency in the program. Having a dedicated budget is also important. One thing that people don't think about when they're entering into attribution is, well, how are we going to test the model?

David Kirschner (17:05):
And the only way really to test the model is to spend money. And here's where there's an opportunity as well. If you're looking for third-party assistance, especially from a seller of advertising, there may be an opportunity to do some type of proof of performance-based advertising, so that you can get a sense of the efficacy of your model without spending a ton of money. And the reason it costs so much is not only do you have to spend and have a control and a test group, but these tests can't run for a few hours or even just a day or two. The best tests run over a period of weeks, where we see true optimization rather than just temporal lift. So whatever your period is, ensure that it is the same. Ensure consistency in the way your tests are structured and run, and everybody will be much happier.
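
As an illustration of that kind of readout, here is a minimal sketch, with entirely made-up numbers, of how a multi-week holdout test might be summarized: a test group that spends per the model's reallocation versus a control group that keeps the status quo.

```python
# A minimal sketch of a holdout-test readout; all numbers are illustrative.

def conversion_rate(conversions: int, audience: int) -> float:
    """Conversions per exposed person over the full test window."""
    return conversions / audience

def relative_lift(test_rate: float, control_rate: float) -> float:
    """Positive lift means the model-driven spend beat the status quo."""
    return (test_rate - control_rate) / control_rate

# Hypothetical four-week totals for matched test and control groups.
test_rate = conversion_rate(conversions=1_260, audience=50_000)
control_rate = conversion_rate(conversions=1_150, audience=50_000)
print(f"Relative lift over four weeks: {relative_lift(test_rate, control_rate):.1%}")
# -> Relative lift over four weeks: 9.6%
```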

David Kirschner (18:01):
Now it's time for the rubber to hit the road: the initial readout. And this is the most important part. They say first impressions matter, that you never get a second chance to make a first impression. So true. This is where I've seen more attribution programs crash and burn than anywhere else, in this first readout. And in many cases it's a shame, because it can take months of data scientists crunching away to get these first reports. But if they are not correct, I've seen the consequences. If you have the wrong metrics, if you have made some kind of small error that gets extrapolated into a much larger one, you really can't turn back at that point. We actually had one meeting where, two slides in, the C-level person we'd invited to the meeting got up and walked out because he didn't believe anything we were showing.

David Kirschner (18:54):
The challenge is twofold here, because when you're doing an attribution program, you're recognizing that what you've been collecting all along isn't the full truth. Whether it's last click, or first and last, or a linear model, you know that there's more to it that a data-driven model might be able to extract. However, in the event that the model fails to do that the first time around, the confidence is lost. People say, well, we asked you to show us the new truth, but I don't believe this at all. It's a real challenge. So I can't stress enough: make sure that first readout is correct. Otherwise the program's going to be cast into doubt. You'll lose confidence, possibly budget. And even if the program does persist after that, it is going to be scrutinized to the nth degree, to the point where it may not be the useful tool that you all know it can be.
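
To see why the model choice matters so much, here is a minimal sketch of how the same illustrative journey gets credited under two of the heuristics mentioned above, last click versus linear. The channel names and the journey itself are made up.

```python
# A minimal sketch contrasting two heuristic attribution models; the journey
# and channel names are illustrative.
from collections import defaultdict

def last_click(path: list[str]) -> dict[str, float]:
    """All credit goes to the final touchpoint."""
    return {path[-1]: 1.0}

def linear(path: list[str]) -> dict[str, float]:
    """Equal credit to every touchpoint in the journey."""
    credit: dict[str, float] = defaultdict(float)
    for touch in path:
        credit[touch] += 1.0 / len(path)
    return dict(credit)

journey = ["display", "email", "paid_search"]
print(last_click(journey))  # {'paid_search': 1.0}
print(linear(journey))      # each channel gets ~0.33
```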

David Kirschner (19:51):
So, some final thoughts. First and foremost, set the data expectations early and clearly, and it's got to be a high bar. There just aren't any shortcuts to a good multi-touch attribution program. So make sure you're setting the bar high for data hygiene, and make sure that is extended to your partners, so that they are trained on your policies and there is commonality in how you're handling and treating data. Tools are nice, but rigor is really what makes it work. There is no substitute for having a good, solid foundation of data. Even in the absence of that, however, you can get some interesting things from a data audit. You can find some interesting anomalies or pieces of data that you didn't know you were collecting that, short of a full attribution program, can be useful.

David Kirschner (20:48):
However, let me caution you about the "any attempt at attribution" mindset, which I've seen: well, we don't have these pieces, we don't have the cost data, but we can run it without that, right? You can, but I've seen that become problematic rather than helpful for organizations. Missing pieces don't help you complete a puzzle, and there will be gaps. Untested, those gaps could make the program more deleterious than not having one at all. And communication is key. You can't over-communicate about this. This is why having the committee is really important. Successful programs are built on transparency; they're well funded, and they have support from the top down. You've got to keep it front and center. What I did at my last company was put large-screen TVs around the office that showed customer journey metrics, and people would stop at them and talk about it.

David Kirschner (21:45):
And it really was useful for keeping data-driven decisions central to the organization. But more than anything, you need to become a sea-change agent, because attribution is, as I said at the start, as much a change management exercise as it is anything else. Before you even start to put together a committee or a data dictionary, take a look around your organization. Think about the resources you have, think about the goals you're trying to achieve. Think about your KPIs, your OKRs, your LNOPs, whatever acronyms you use—consider them, because things that you may not have thought about, like comp plans, can be impacted by attribution programs. I know many an SEM manager who was not happy to see us show up at their organization, because we would reallocate monies they had claimed over to display, or vice versa.

David Kirschner (22:36):
So as you're going through this, take feedback for what it is: feedback is a gift. I got a lot of it when I was at Google, and I appreciated it all, because it helped me grow and understand how to meet my goals and my clients' goals better. I urge you all, once it's posted, to register for my upcoming webinar this fall on change management and how to become that sea-change agent. And with that, I will end, but I will encourage all of you to visit my side business. It's snotrocket.com, super classy and very well positioned for a COVID-19 outbreak. But nonetheless, if you're a cyclist, a runner, a swimmer, or just a wise guy, please stop on by. There's a code there that you can use to get a discount. Thanks again to everybody at ObservePoint, and I hope to see you this fall at my next webinar.

 
