10 Organizational Tips to Get the Most Out of Your Data
I first want to say thank you to ObservePoint for having me here today. I feel really honored to be part of a stacked docket, if you will, of wonderful speakers within the analytics world.
Before I start diving into the presentation, I wanted to give some perspective on what I'm trying to accomplish with these next slides. First and foremost, I want to share the organizational lessons I've learned working as a, quote, "digital data strategist" for the NBC News Digital team, but I also want to address some of the sentiments and situations that most of us who work in digital data run into. While some problems are business-specific, others aren't.
This presentation isn't designed so much to revolutionize the way we think about data; we'd certainly need more than 30 minutes for that, and probably many more, smarter people than me, to address those problems. At its core, the focus of this presentation is to expose you to some of the underlying problems I've heard expressed by clients, analysts, developers, and customers over the years, and to offer some basic organizational tips that create less confusion.
So, I guess we’ll just go right into it. What exactly is a data strategist? When I tell people in the outside world that I’m a manager of data strategy, I’ve often gotten a number of people telling me, “That sounds a little made-up,” or they’ll reference the Special Projects Manager in the TV show, The Office. For those in the community who may still not know what “data manager” means or what “data strategy” means, I’m going to go over what I am and what I’m not.
I'm a tools expert. I'm the guy people go to when they have an issue with a tool. These are usually the head-scratcher problems, the ones that can't be solved with a quick surface look at what's happening. The complexity of these problems is usually far-ranging. They can be as simple as defining what a visit is, all the way to very complex segmenting problems.
So, I'm a problem solver, obviously. Things will eventually go wrong with something, and you don't want your analysts spending tons of hours looking for answers to a tool or implementation problem; you want them analyzing. You certainly don't want your developers spending a lot of time on it when they have other priorities, so I'm in that middle, negotiating between the parties involved.
Which is exactly the third bullet here: I'm an evangelist as well. I'd say this has sort of dropped down in importance in our organization over time, because we now have so many key stakeholders who are so keen and intuitive with data. I don't need to evangelize why data matters, or the importance of understanding data and of accurate data, because we have so many people on board already.
Last, but not least, I'm flexible. I contemplated leaving this off the slide, but I left it in. The reason I kept it, despite it being fairly obvious, is that we've all worked with folks who look at a task and immediately decide it's not their problem or it's outside the scope of their role, so they kick the can down the road. I won't say never be that type of person, because there are things that will fall to you that genuinely aren't your problem. But if a task has any sort of relation to analytics, you should probably take it on, try to understand what it is, and make it work.
What I’m not—I’m not a data scientist. I’m not a developer. I’m not an analyst. Though I do play an analyst a lot of the time because I help analysts in their day-to-day work.
Within your organization, you may be thinking, "We are completely different from company X or Y or Z." And that's probably true. No two organizations are exactly the same. Even if you were to compare media company A to media company B, they may be completely different. For instance, I know for certain that CNBC, while being part of my greater NBCUniversal organization and basically very similar to NBC News as a business, is a different organization. They have different ways of getting data into their system, different visualization tools, different business objectives, a different maturity of business. All these things can make your businesses different.
But even though so many things are different, many things are exactly the same. I just put up a bunch of things that I hear or have heard over the last 10 years. Literally last week, I had someone come to me and ask exactly what the definition of a page view was. And that's not even being hyperbolic; that's a true statement.
As we move into what I call the implementation problem, what we're looking at here is a purposefully convoluted chart of how things interacted within our organization prior to me coming on board. There wasn't a clearly defined way to escalate issues. Marketing tools and analytics tools, if you want to separate those into different buckets, were handled in a very unstructured way. They were implemented with little context, hardcoded to the page, or lived in some legacy JS container tool that was implemented by a developer who had long since left the company. If this looks like your organization, then I sympathize and understand what you're going through. However, if your organization doesn't look like this, awesome. That's great.
The good news, for those who have the workflow shown here, is that this type of workflow isn't sustainable. Ultimately it will become clear to all parties involved that the complexity, or I should say the disorganization, of the system needs to be cleaned up and consolidated. This is roughly what we had before, and here is what we moved into.
Tip number one: introduce an analytics product owner, or in my case, what we call a data strategy manager. Think of it like a marketing person reaching out to developers. What we're doing here is adding efficiency to the system. You bring in one dedicated owner not only to manage and prioritize the influx of requests coming in, but also to own data governance. That's where ObservePoint would come into that little equation there. If something went wrong, this person would know why, or how to fix it. But there's still one huge problem: most of this work still needs to be implemented. You can do that through developers or contractors or a combination of both, but it's still a complex system.
So, what we did here, and what is tip number two, is introduce an analytics architect. This person is the lead developer who handles all of the analytics requests, creating order from the chaos. The analytics architect creates oversight, which essentially means creating and disseminating a newly built unified framework.
That brings us to tip number three. Building a unified framework can mean multiple things to multiple people, but simply, it means this: creating one over-the-top system that unifies all of your apps and sites so that everything means the same thing. Or, in the best case, everything available on each platform is exactly the same. What do I mean? This could be something like an event handler that creates easier track methods for measurement, or more simply, just ensuring that there is consistent metadata with all the events happening on the page.
The simplest version is building out the structure for an analytics data layer. That becomes the source of all of the metadata provided with each of the events happening on your site. The analytics framework is the resource everyone goes to: there's documentation on it, there are best practices in place to follow, and all of it is designed to create efficiency and simplicity in what we're collecting. These first three tips have actually been the most successful things we've done within our organization.
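To make the idea concrete, here is a minimal sketch of what a unified framework might look like: one data layer shape shared across platforms, and one track method every event flows through so it always carries the same metadata. All the names here (`DataLayer`, `track`, the field names) are hypothetical illustrations, not the speaker's actual implementation.

```typescript
// Hypothetical sketch of a unified analytics framework: a single data
// layer shape, and one track() wrapper that attaches that metadata to
// every event so "the same event means the same thing" on every platform.

type DataLayer = {
  page: { name: string; section: string; platform: string };
  user: { loggedIn: boolean };
};

type AnalyticsEvent = {
  action: string;      // e.g. "page-view", "share-click"
  meta: DataLayer;     // every event carries the same consistent metadata
  timestamp: number;
};

const sent: AnalyticsEvent[] = []; // stand-in for a real vendor/tag-manager call

function track(action: string, dataLayer: DataLayer): AnalyticsEvent {
  const event: AnalyticsEvent = { action, meta: dataLayer, timestamp: Date.now() };
  sent.push(event); // in production: forward to the tag management container
  return event;
}

// Every platform populates the same data layer shape...
const dataLayer: DataLayer = {
  page: { name: "home", section: "news", platform: "web" },
  user: { loggedIn: false },
};

// ...and fires events through the same method.
track("page-view", dataLayer);
track("share-click", dataLayer);
```

The design point is that analysts never have to ask what metadata an event carries: the data layer is the single documented source, and the wrapper guarantees it rides along with every event.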
And I'm not saying we're perfect, we're still learning, but if you haven't done all of these, or any of these, at any moment in time, you should consider doing it. Once we did this, we were able to solve a lot of problems, which I'll get into shortly.
Before we get into the last, most tactical tips of the presentation, I wanted to talk about points of failure with analytics, and how we have discussed and approached things when something goes wrong at any one of these points across the board. I put this in here because I was identifying the points of failure that happen in an analytics system. The arrows represent that everything flows through the implementation. It flows both ways, but the idea is that implementation is the middle ground between the two sides, and that's ultimately where the product owner lives and can solve for these problems.
I've talked about this before, and this is pretty basic, but I really think it's important. Tip number four: corral your marketing tools. When I say corral, I'm basically saying introduce some sort of tool, built internally or from a third party, to manage your marketing and analytics tools. What you're looking at on the left, I've sorted into something like V1 and V2 product features. I would say Adobe's tag management and Google Tag Manager are the V1s, and then we have tools like Segment and mParticle, which are a bit more like tag management on steroids, with more product features. But any one of these would work for managing your system.
What I'm saying is: move to one of these, build one yourself, whatever; it doesn't matter, but it's super important. What that does is ensure things are no longer randomly hardcoded to the site and forgotten about. Once you have all of this in place, you'll be able to understand who's using the tools, which pages they should be on (because not all tools need to be on all pages), etcetera.
As a tip, I would recommend an audit in the first year, probably every six months, and then one every year after that, to see if each tool is still being used. The question I hear a lot is: how do I know when to pull a tag? If you can't find an owner (hopefully this won't happen to you), you've exhausted all options, and it's been on the page for two years… I have the tendency to just pull it. I guarantee you this: if someone's using it, you'll hear about it, and if no one's using it, you won't.
Ultimately, that's a decision you can make, but usually, if there's no owner, it's been on there for two years, I don't see any data, and I don't think anyone's using it, I'll generally pull it from the site. But again, it's really important to corral your marketing tools and data. Use one of these tools, use your own, whatever, but get everything in. Build a platform where everything goes through that tool; that's a great way to start corralling your data.
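The pull-or-keep rule above is mechanical enough to sketch. Once everything is in one container, you can keep a simple registry of each tag's owner and last observed data, and flag removal candidates using the heuristic from the talk (no owner plus roughly two years with no data). The registry shape and names below are hypothetical, not any particular tag manager's API.

```typescript
// Hypothetical tag registry: with all tools corralled in one container,
// record an owner and last-seen-data date per tag, and flag candidates
// for removal (no owner, no data for ~2 years) at each periodic audit.

type Tag = { name: string; owner: string | null; lastSeenData: Date };

const TWO_YEARS_MS = 2 * 365 * 24 * 60 * 60 * 1000;

function pullCandidates(tags: Tag[], now: Date): string[] {
  return tags
    .filter(t => t.owner === null &&
                 now.getTime() - t.lastSeenData.getTime() > TWO_YEARS_MS)
    .map(t => t.name);
}

const registry: Tag[] = [
  { name: "legacy-retargeting-pixel", owner: null, lastSeenData: new Date("2015-01-01") },
  { name: "site-analytics", owner: "data-strategy", lastSeenData: new Date("2017-09-01") },
];

// With "now" in late 2017, only the ownerless, stale pixel is flagged;
// the owned, active tag is left alone.
const toReview = pullCandidates(registry, new Date("2017-10-01"));
```

A flagged tag still goes through the "exhaust all options to find an owner" step before anyone actually pulls it; the script only surfaces candidates for the audit.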
Number five: contextualize your analytics development tasks. Maybe you've written solution design documentation that seemed relatively straightforward, given it to a developer, and found that the end result, or at least the first pass you start QAing, is way off from the thing you were trying to measure. Most of the time, what I've learned in post-mortems from this type of situation is that the developers didn't fully understand what needed to be measured, because they felt the documentation didn't have enough context.
I'm not sure if you've had this problem; maybe you haven't, or maybe you solved it with some other tools. But there are a few ways I handle it, and I think you could too. The most commonly used approach is writing user stories, which I think are okay, though they can be very text-heavy, and that may not be perfect for your audience. I actually find it more useful to build out a presentation on the actions you're looking to measure, using screengrabs and visual aids. Either will work, user story or presentation style; you just have to know who you're talking to and what works for them.
I write it here as "stay in the loop," but it's essentially an open line of communication between yourself and the developer. For instance, we had a new Today app being released for the anniversary of the show last year, in 2016. It had a brand-new feature, a digest, which was effectively a new measurement study for us. The solution design document is basically a shareable spreadsheet in Google Drive, and if you were just to look at that document, it reads like "this event measures this action, and this event measures that action."
It didn't make much sense without getting into the specifics. But when I took screengrabs and visually represented, or simulated, the actions the user was taking, it clicked pretty fast with the developers, and they were able to immediately implement the changes and the measurement we wanted. It was super effective, and I recommend doing it.
Number six: know your tags and products. This seems simple enough on the surface, but I assure you, the more you learn about the products, the more efficiently you'll run as an organization. Not only will you likely save money by being able to move away from managed services, but you'll also save money by knowing what your current tools measure, and not having to bring in another tool that measures exactly the same thing you just didn't know was there.
There are innumerable benefits to understanding your tools in depth. I've already mentioned one: cost savings. But generally, solving a problem now will save time down the road, because when that same problem comes back six to eight months later, you won't have to spend a ton of time solving it again; you've already done it.
I'll give you a real-world example of this. We had an analyst working on a pretty complex segmentation problem that involved multiple visits over a period of time. Because we were locked into one tool to do this segmenting, she started working on the project and came up with a result. The result was approved, it was put into the wild, and people started using it as a business KPI. But then a few people, myself included, started looking at the data, and it started to seem like what was coming in was not exactly what it should have been. The segmenting was off. It's not that she did it wrong; it was a problem with the tool. The way we had identified and used segmenting before was no longer applicable, given the conditions we had set for the definition of this segment.
The first step was noticing it. The reason we noticed was that we knew the tool well enough to make some correlations, and we worked with multiple teams and eventually got it fixed. Again, we only caught it because people knew the tool, and in the process we came to know the tool even better; we went deep on that product. This is a place where a product owner steps in. The amount of time spent solving this problem was extraordinary, multiple hours over multiple weeks, and when the product owner takes that on, it relieves the analysts from having to do that work. The more time analysts spend on this type of effort, the less time they have for the work they actually need to do in their workflow.
Number seven: cross-train and learn. After that moment, we were able to look back and share what we learned. We showed all parties involved what needed to happen moving forward and how to handle these situations in the future. I put the old proverb here: give a person a fish, feed them for a day; teach them to fish… you know the rest. It's cliché, but it's totally true. Teach people how to do the thing they need to do, so that next time they can do it themselves. And document: I think everyone should do it. Find the way that works best for you, whether that's Google Docs or whatever tool you want to use, and document the things you've found, because it's very useful for people when they need to look back and solve a problem they're having right now.
One thing I did want to talk about is what I call the "Billy Problem," which makes no sense on the surface, so let me explain. Billy was hired for a project. He was brought in to do all of the analytics because the developers were unfamiliar with analytics and didn't want to do it. So, they brought Billy in to do the analytics work. Billy was a contractor or consultant brought in for six months. After the six months were over, everything was great. All the things we wanted tracked were tracked beautifully. Billy did a great job. Then Billy left the company. Three months later, things stopped working, for whatever reason. Billy hadn't been communicating with the other developers.
What I'm saying is: avoid the Billy Problem. I'm not saying avoid managed services or consultancies or any of those things. If you bring those folks in, just make sure you document and understand exactly what they're doing, and cross-train on everything Billy has done for you. Because ultimately, like everything else in the world of analytics or technology, eventually it's going to break. And Billy is no longer with the company, so you need to figure out how you're going to solve that. Avoid the Billy Problem.
Open up the data. That's the last point here, and I want to talk about it briefly. Opening up the data (yes, "democratize the data," you've probably heard that) matters because the reason we were able to identify that earlier segmentation and loyalty problem is that the data was open. Anyone could go in and look at how it was being captured, how it was being visualized, and what that number actually meant. Because it was open, we could look at it with different eyes and see different problems, and you could see that some of the numbers didn't look right. Something looked right to one person but not to another.
By opening up the data, people were able to understand what was happening and, consequently, solve the problem. It sounds obvious, but these problems happen more often than I care to admit, and we do our best to solve for them.
I could spend a lot of time on this one. Number eight: set realistic timelines. The key point is that as you work through your projects, you begin to understand there's a limit to getting everything into the project at the get-go. I'll go over it on the next slide, but the key is basically to build trust within the organization. If you watch Star Trek, Scotty would say something is going to take two months and then get it done in, like, a day. But don't sandbag either; that's the opposite of what you want to do. You want to be realistic. You should understand the limitations of the scope of the project being worked on, and how much of what you need you can actually get done.
Let the project have its own stakeholders, and you be the stakeholder for analytics. That is a separate, parallel plan: an analytics timeline. Analytics should have a separate timeline from the products being released. Now, I'm not saying don't do anything; there's always going to be some analytics you'll get in. But don't be overly aligned with the product, because the product team is going to move at their own pace, and they might not consider analytics in their pacing. You have to pace your work parallel to their product, understand their timeline, and plan how you're going to do it. By setting realistic timelines, you're going to build trust.
This ties in closely, they're almost the same: setting realistic timelines leads into what I call MVP for the win. If you don't know, MVP is minimum viable product. Understand the limitations of the scope. Identify the must-haves, and don't be afraid of missing data. I'll tell a story about that in a second, but one of the things I do want to highlight here is: evaluate the features of the product and look at them with new eyes.
What I'm saying here, and there's a classic example of this, is: should a slideshow be a page view, or should a slideshow be a new event? And how will that affect page views as a business metric? These are things you should conceptualize and think about as you're creating your analytics design. When I said don't be afraid of missing data, I'm not just talking about features you can't get into your MVP. Don't be afraid of changing your event structure to remove some key performance indicators; if they're no longer key performance indicators, there's no reason to keep them just to keep a number high.
You're building trust and communicating across the business, letting them know what is and is not part of the standard you're setting. And again, it may not be all on you to make that decision; it may be a discussion across the organization. But don't be afraid of doing it. Don't be afraid of not counting something as a page view just because you don't want to lower the page view number. That's potentially a narrow view of the business. Again, don't be afraid to rock the boat.
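The slideshow question above is easy to see in numbers. This toy sketch (all names hypothetical, not any vendor's API) models the same user behavior under the two designs and shows how the choice alone moves the page-view KPI:

```typescript
// Hypothetical illustration of the slideshow design question: if every
// slide advance fires a page view, the page-view KPI inflates; modeling
// the advance as a distinct event keeps the KPI closer to real pages read.

type Hit = { type: "page-view" | "slide-advance" };

function pageViews(hits: Hit[]): number {
  return hits.filter(h => h.type === "page-view").length;
}

// A user lands on a 10-slide gallery and clicks through every slide.
// Design A: each slide is a page view.
const asPageViews: Hit[] = Array.from({ length: 10 }, (): Hit => ({ type: "page-view" }));

// Design B: the landing is a page view, each advance is its own event.
const asEvents: Hit[] = [
  { type: "page-view" },
  ...Array.from({ length: 9 }, (): Hit => ({ type: "slide-advance" })),
];

// Identical behavior, very different KPI: 10 page views versus 1.
const kpiA = pageViews(asPageViews);
const kpiB = pageViews(asEvents);
```

Neither design is "wrong"; the point is that the choice should be made deliberately and communicated across the business, not made implicitly to keep the page-view number high.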
A quick story about AMP. Accelerated Mobile Pages is what they're called, and they load on mobile devices. They're a decent part of our business, and we had a gap there because we just couldn't get them into the MVP of the product. Of course, we messaged that out, and a lot of people were very worried about it. But because we had laid out the data and the timeline, and we said we'd do a fast follow and get it all measured in the future, people were like, "Oh, okay, I get it. We can't get it in because of blockers A, B, and C." Again, don't be afraid of missing data, especially if you communicate it out.
This is my last one, number ten: allocate time for experimentation. Step out of the box. Look at the competition. Attend conferences such as this one. Investigate your passions. I don't mean on-the-job looking at baseball statistics or cat pictures; I'm saying investigate your passions within the analytics space. I don't know what the right percentage is, five percent, 10 percent, 20 percent; I think that's up to the organization. But what I'm advocating for, whether for an analyst, an analytics product manager, or a developer, is some free space to think about things within the framework of analytics. That could be a tool, some sort of database technology, or some new system to plug things into, whatever. Maybe you have an analyst who normally uses SQL and wants to learn R, etcetera.
It's letting them use their curiosity to maybe solve some business problems outside of the normal tools we have. Much like the cross-training, sharing, and openness of knowledge I talked about earlier, which can help you understand future business problems by understanding the tools themselves, this is the same thing. It may be that you've gone to a conference and heard about some method or some new way of looking at something that will solve future business problems.
And the only way you're ever going to discover that other thing is by stepping out of your current tasks. That's one of the most important things to consider when allocating time for experimentation: is there an applicable way to use the things I'm learning out of my own interest for the business in the future? I think that's really important, and everyone should definitely try it.
Finally, this is the end of the presentation. These are the top 10 tips, and by the way, they're not in any kind of order; this isn't "do this first, then that." Introduce an analytics product owner. Hire or train an analytics architect. Build a unified framework. Contextualize your development tasks. Corral your marketing tools. Know your tags and products. Cross-train and learn. Set realistic timelines. MVP for the win. Allocate time for experimentation.
It's very possible that you're listening to this and thinking, "Peter, we do, like, seven of these." That's great; this might not have been the best use of your last 20 minutes. But if you've only done one or two of these things, or you've experienced all the problems we've discussed here, this is a great way to get everything into alignment. You can solve these problems. I don't think any of them are earth-shaking or revolutionary. These are very tactical steps that, once you take them within an analytics organization, are also extensible; you can apply them across your organization.
As an aside, I always wanted to make a slide you could take a photo of, and this is my attempt at that; this is the photo moment. Anyway, I really appreciate your time, folks. I'll be answering questions in the chat, so feel free to reach out. Thank you very much.