Standardizing Reporting In A Growing Media Landscape

November 2, 2017

Slide 1:

I appreciate being here. I’ve enjoyed the summit so far, and I’m looking forward to the rest of the day. As mentioned, my presentation is about standardizing reporting in a growing media landscape. And I do work at Nickelodeon, which is a really excellent place to work.

Slide 2:

Before we dive too deep into everything, I brought some fun facts about Nickelodeon. It was started close to 40 years ago and it’s owned by Viacom. It’s one of many Viacom brands you’ve probably heard of or interacted with in the past, like MTV, VH1, BET, and Paramount Pictures. The name “Nickelodeon” actually comes from the original five-cent movie theaters, which were called nickelodeons. That’s why we’re named that.

And when I discuss Nickelodeon throughout this presentation, I’m referring to both the Nick brand and the Nick Jr. brand. Their audiences are the kids between two and six, which would be our junior audience, and the six-to-12-year-olds, which is really the sweet spot for our Nickelodeon folks.

One of the best things about working at Nickelodeon is that we get this really killer mission statement, which is: make the world a more playful place. We often get to use that as a trump card when weighing fun things against mundane ones.

Slide 3:

I’m going to talk to you today about what we’ve been doing as the media landscape changes. This slide is just to get your minds around what we’ve been up against. Originally, in 2007, Nickelodeon had a Nick.com and a Nickjr.com website. Two websites across the entire portfolio of what we did digitally.

Then you can see that around 2012, all of a sudden, a ton of different things started to happen. That was partly because people were starting to watch TV in new ways, and partly because more people had access to cellphones, Roku devices, and Apple TVs. It’s also important to know, when looking at this timeline, that what we had in terms of ability to track has really changed over this 10-year timeframe.

So what we were doing in 2007 for our websites has changed drastically from what we were doing with our implementations and reporting for the Nick Jr. Android TV app, which launched in March of 2017. As all of these new platforms and apps and sites came on board, we got to a place where, as of the past year, we didn’t have everything standardized and set up in a consistent way, from a reporting standpoint.

Slide 4:

Our goal here, and what I’m going to talk about during this presentation, was figuring out a way to make sure all of our products were standardized and reporting in the same exact way. A couple of things to note specifically about what Nickelodeon utilizes: first, it’s not on this slide, but we don’t have a tag management system. We use the Adobe suite, with Test and Target, Reporting, and Workspace.

We build all of our apps and sites into their own suites, so they’re siloed in their own report suites. Another really important point here is that the consumer experience across all of our apps and sites tends to be different, even within the same brand. So Nick.com has a different feel and interaction from the Nick iOS app, which has a different feel and interaction from the Nick Roku app.

As we go through this presentation, you’ll see I’ve geared a lot of it toward the terminology Adobe uses for web analytics. I’ll try my best as I go to call out those terms and explain them as best I can, but of course, if you have any questions, feel free to ask in the chat.

Slide 5:

We started this process when I joined Nickelodeon, which was about two years ago, and we had a lot of challenges lying ahead of us. As I showed earlier, we had just grown exponentially in our digital space, creating a bunch of new sites, a bunch of new apps, and over-the-top devices. And across our portfolio, all of our suites had a different naming convention and a different setup for which eVars, props, and events were being utilized.

Again, this is one of those times where I’m using Adobe terminology. EVars and props are really just dimensions to signify a reporting variable; the only difference is that eVars can persist across more than one call. Events signify an action taking place in reporting. Across the different report suites we had for each of our sites and apps, we hadn’t figured out a way to make sure everything was standard and using the same list and the same understanding. This brought confusion not only to our analytics team, which was trying to get an understanding of the business, but also to our stakeholders, who were dabbling in pulling data out of Omniture.
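
To ground the Adobe terminology in something concrete, here’s a minimal AppMeasurement-style sketch; the variable numbers and values are hypothetical, not our actual mapping.

```typescript
// Adobe's standard AppMeasurement tracking object, assumed to be on the page.
declare const s: any;

// A prop is a traffic variable: it applies only to the single call it's set on.
s.prop3 = "homepage";

// An eVar is a conversion variable: it can persist across subsequent calls,
// with its expiration (visit, 30 days, etc.) configured in the admin console.
s.eVar3 = "homepage";

// An event signifies an action taking place, e.g. a video start.
s.events = "event5";

// Send the beacon with everything set above.
s.t();
```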

The second big hurdle we had to overcome was that the values being populated within these eVars, props, and events were also not standardized. A really good example of this is something most websites have: a homepage. Nickelodeon and Nick Jr. both have a homepage. On our end, when that homepage was on Nick.com or Nickjr.com, we were naming the value “homepage,” but when it was on the iOS app or the Android app, we were calling that same page, with the same functionality, a “main grid screen.” Then, to make it even more complicated, when we went to the Roku, Apple TV, and Android TV experiences, we were calling it the “homescreen.” So we had three different terminologies, all meaning the same thing, populating in different places across different sites and apps.

Then the final challenge we had to overcome was that the backend of how our implementation populated these eVars, props, and events had a lot of differences between platforms. All three of these come down to the same main point: nothing was standardized. Our goal was to tie it all together and figure out a path forward to standardize everything.

Slide 6:

Here is our solution and what we did going forward from this point. The first thing we did was a gigantic audit. As I mentioned on the earlier slide, our eVars, props, and events were pretty much a mess, and we needed to wrap our heads around what the differences were and where they were occurring so we could figure out a way to start to standardize. Again, this was a wide range of apps and sites; you’re looking at about 14 different properties, so it was really important to us to map it all out and start from a place where we could really nail down what we wanted to standardize.

The second step in our solution was to create a reporting layout that was simple and that could, and would, be utilized by our engineering teams across all of our different platforms and suites. For the next step, once we had done the audit and worked through our standardized layout, we realized a lot of engineers had been hard-coding values, which was creating the differences in values I talked about earlier.

We decided to create a data dictionary to allow our engineers to utilize a feed instead of having to hardcode so many of the values we wanted to populate, thereby killing two birds with one stone: one, hopefully making it easier on our engineering team, and two, allowing us a little more control over what we wanted those values to be, making sure they were the same across all of our sites, apps, and suites.
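
To sketch the idea (the feed URL and its shape here are hypothetical, just to illustrate):

```typescript
// Hypothetical shape of the data dictionary feed.
interface DataDictionary {
  pageNames: Record<string, string>; // e.g. { "homepage": "homepage" }
}

// Engineers load the dictionary once instead of hard-coding each value.
async function loadDictionary(): Promise<DataDictionary> {
  // Hypothetical URL; the real feed's location isn't covered in this talk.
  const res = await fetch("https://example.com/analytics/data-dictionary.json");
  return (await res.json()) as DataDictionary;
}

// Every platform then reports the same value, because it comes from one feed:
//   const dict = await loadDictionary();
//   trackPageName(dict.pageNames["homepage"]); // however your platform reports it
```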

Then the final hurdle, and this is obviously the biggest one, was that we then had to implement our single layout, go through user acceptance testing, our UAT process, and pass through the hands of our QA, our quality assurance, teams.

Slide 7:

Part one: what did this audit look like? I’ve taken a snapshot here. I’m sorry it’s a little bit small, but hopefully it at least gives you a sense of the scale of what we were looking at. Each row that’s tinted a little green here, where it says “eVar = v1,” was an eVar, a prop, or an event. Just to put the scale into context, we had about 100 eVars, 75 props, and almost 150 events across all of our apps and sites. On the left-hand side, we had the names of each of the eVars, and across the top, we had a list of all of the sites and apps we were utilizing. What we did was make a matrix out of those two fields to understand where everything was populating and what the naming conventions were.

Down the right-hand column, we added some additional information that was really helpful as well, such as a sample value for each variable, which is one of the reasons we ended up deciding we needed a data dictionary. For things like eVars, we also showed the expiration, when the variable was expiring, so we could really find the best path forward for what we wanted to accomplish.
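
If it helps to picture the structure, each row of that matrix amounted to something like this (the field names are my own illustration, not our actual spreadsheet columns):

```typescript
// One row of the audit matrix: a single reporting variable crossed
// against every site and app in the portfolio.
interface AuditRow {
  variable: string;    // e.g. "eVar1", "prop12", "event40"
  description: string; // what the variable is supposed to represent
  // Property name -> the value observed there, e.g.
  // { "Nick.com": "homepage", "Nick iOS": "main grid screen" }
  valuesByProperty: Record<string, string>;
  expiration?: string; // eVars only, e.g. "visit" or "30 days"
}
```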

Slide 8:

Once the audit was set up and we had these matrices put together, it came time to clean up all of our naming conventions. That was as easy as sitting down with our data governance team, the people who really know who is utilizing the data and where, making some tough calls, and being willing to make a lot of changes. That process allowed us to cut back on a ton of duplication.

Prior to auditing and figuring out what names we wanted, we were using 89 percent of all of our eVars and 91 percent of all of our props. Once we cut back, we got that down to 40 percent of our eVars and 28 percent of our props. This not only helped our analytics team, because there was a lot less to manage, but also gave our stakeholders a much clearer, simpler path to understanding how our implementation worked. We didn’t have to wonder if eVar3 was eVar7 in a different suite, or whether we looked at video one way in one suite and a different way in another.

Another big help in cutting back on duplication was taking Adobe’s standard variables and utilizing them. Prior to this audit, we had a lot of places that were bringing in Adobe variables and matching them to eVars or props. As we were going through this process, we really tried to stop doing that as best we could and in as many places as we could. Things like the app ID or resolution were values Adobe was already passing to us as standard, so we stopped using eVars or props to capture that same data.

Slide 9:

After that, it was time for what we dubbed the “Aggro Crag.” For those who aren’t very aware of Nickelodeon, or Nickelodeon’s history, at one point in time there was a show called “Nickelodeon GUTS” that ran on our air. And this was, direct from the Wikipedia page, an “action sports competition series,” which is great. The series originally ran between 1992 and 1996. Each episode featured three young athletes competing against each other in four extreme versions of athletic events, culminating in a fifth and final round, which sent the three competitors on a race up an artificial mountain called the “Aggro Crag.”

The reason this describes what our layout was all about is that the Aggro Crag is divided into three faces. Those faces are colored differently and have different challenges depending on which one you’re climbing. That was what we were up against with this layout. We had to be able to utilize it not only on the web, not only on phones, not only on tablets, but also on television, on the OTT devices. We had to make a singular spec that was able to have many different faces, similar to the Aggro Crag. And the name sounds great, so we kept it.

Slide 10:

When setting up our layout, we did three other very Adobe-specific things that I want to talk through, and I’ll try to dive a little bit into what each of them means. The first was that we moved from a place where we were setting up in the code which eVars, which props, and which events were populating, to setting up context data variables. A context data variable is a developer placeholder for a reporting variable. It allows for more concise scanning: now, instead of our QA team having to look at “eVar3 = homepage,” they can actually see “page name = homepage” within the code. It also gives our analytics team a little more ability to change what’s happening with the data.
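
In AppMeasurement terms, the shift looks roughly like this (the variable number and the “page.name” key are illustrative):

```typescript
declare const s: any; // Adobe's standard AppMeasurement tracking object

// Before: the reporting slot is baked into the code, so QA reads "eVar3 = homepage".
s.eVar3 = "homepage";

// After: the code sets a readable context data variable instead...
s.contextData["page.name"] = "homepage";

// ...and the analytics team maps "page.name" into eVar3 (or anywhere else)
// with a processing rule, without needing another code release.
```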

Once we set up the context data variables, we also had to, and this is the last bullet on this list, develop a single set of processing rules. A processing rule is just a way for our team to manage what’s being populated from a context data variable and where we want to pass that data into our eVars and props.
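
Processing rules are configured in the Adobe admin console rather than written in code, but the logic of a single rule reads roughly like the function below (the context key and eVar number are hypothetical):

```typescript
// The subset of a hit that this illustrative rule cares about.
interface Hit {
  contextData: Record<string, string | undefined>;
  eVar3?: string;
}

// "IF context data 'page.name' is set, THEN overwrite eVar3 with its value."
function applyPageNameRule(hit: Hit): Hit {
  const pageName = hit.contextData["page.name"];
  if (pageName) {
    hit.eVar3 = pageName;
  }
  return hit;
}
```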

The final thing we did when setting up our layout was discuss with our engineering team how best to get onto the latest Adobe components wherever possible. The main goal, since we’re in the media industry, was to get from a place where we weren’t utilizing Adobe’s Video Heartbeat library to one where we were. For those of you who aren’t aware, the Adobe Video Heartbeat library takes us from sending a beacon call every 25 seconds while a user is watching a video to sending a call every 10 seconds. That allows us a ton more understanding of where users are dropping off from our videos and how far into our videos the majority of our users are getting, which was a really big win for this standardization process.
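
To show what that cadence means mechanically, here’s a bare-bones illustration of the ping loop. This is not the actual Heartbeat library API, which manages the timer for you; it’s just the concept.

```typescript
const HEARTBEAT_INTERVAL_MS = 10_000; // 10-second pings, versus our old 25-second beacons

function startVideoPings(
  videoId: string,
  sendPing: (id: string, positionSec: number) => void
): () => void {
  let positionSec = 0;
  const timer = setInterval(() => {
    positionSec += HEARTBEAT_INTERVAL_MS / 1000;
    // Each ping reports how far into the video the viewer is, which is what
    // makes drop-off analysis possible at 10-second granularity.
    sendPing(videoId, positionSec);
  }, HEARTBEAT_INTERVAL_MS);
  // Call the returned function to stop pinging on pause or complete.
  return () => clearInterval(timer);
}
```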

Slide 11:

The final thing I really want to mention, just in case anyone is trying to recreate these steps at any point: even once we thought we had a layout that was really top-notch and was going to help our business and drive all of our goals, we needed our engineers and technical teams to look at it and start to implement it before we could adjust it into its final form. That’s because once you get into the implementation process, there are always technical challenges to overcome.

I thought the best way to outline this was with a quote from Patrick, a member of the SpongeBob crew, where he asks if mayonnaise is an instrument. It’s very much like that: our team deciding on something and then having to go to the engineering team to determine whether what we wanted to do was feasible and could actually be done.

Slide 12:

The third part here was to understand and build out a data dictionary. I talked about this earlier; it was responding to the challenge of different values populating the same variables across our platforms.

Slide 13:

A real-world example of this was that we had different naming conventions across some of our main sites, apps, and suites. Here, I’ve just shown our property pages specifically. On our Android property, property pages were being recorded as the URL key, then a colon, then the word “property.” But on iOS, it was the name of the property, a space, a slash, and then the name of the property we were looking at again, duplicated. On web, it was a third way: the URL key, a colon with no space in between, and then the word “series.”

So what our dictionary was utilized for was to define what the developer has to look up, in this case the property page; what we want the standard pattern to be, which here we decided was property name, space, colon, space, “property page”; and then what an example value of that pattern looks like. The developers can just plug and play with our lookup values and make sure they match our standard reporting pattern. And it all lives in a feed they utilize, which allows for standardization and makes things a little more hands-off for the engineering team and a little more hands-on for our team, with more control on our side to make sure we’re getting standard values from everywhere we need.
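
As a sketch of that lookup-plus-pattern idea (the key name and example property are hypothetical):

```typescript
// A dictionary entry: a lookup key plus the standard pattern for that page
// type, so every platform formats the value identically.
const patterns: Record<string, (propertyName: string) => string> = {
  // The standard we settled on: property name, space, colon, space, "property page".
  propertyPage: (propertyName) => `${propertyName} : property page`,
};

// Any platform produces the same standardized value from the same lookup:
const value = patterns.propertyPage("spongebob");
// => "spongebob : property page"
```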

Slide 14:

We’ve moved on to the final step, which was to take all of the things we had just done, implement them, go through user acceptance testing with them, and do some quality assurance. To do this, our engineering, analytics, and product teams all sat down before we got started and decided the best path forward was to utilize a kanban waterfall approach to implementing all of our changes. This gave us a lot of great things.

The first was that we were able to chop up each implementation, each specific call, into its own ticket. So a ticket for each call was set up for the engineering team to work on; the engineers worked on a ticket and then passed it to our analytics team for acceptance testing, and then we passed it over to our quality assurance team for some QA. This approach let us see those items in motion while they were happening. We were also utilizing HipChat, but it could easily be Slack; just have a channel of communication open while this is going on. Each product took a different amount of time, but on average it was about a month to a month and a half to go through and redo an entire reporting spec, get it UAT’d, and get it QA’d to make sure we were getting the data we wanted.

Slide 15:

Here’s where ObservePoint came in handy a lot. Not only was ObservePoint great at QAing and at helping us automate checks for the journeys that were really important and that we knew had to be right, but we were also able to dive in with ObservePoint Labs and do some really cool things.

The first thing I’ll call out here is ObservePoint Labs’ sequential validation tool. Sequential validation allowed us to see, in order, all of the calls that were firing. This became essential when we were looking at our video heartbeats. With video heartbeats, we needed to be able to see exactly which calls were firing when, and in what order, to make sure we were maintaining the proper path and to validate that the data we were going to get would tell us what we needed.
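
The kind of check this enables can be sketched as a simple in-order assertion over the captured calls (the call names are stand-ins, not ObservePoint’s actual output format):

```typescript
// Returns true if `expected` appears within `fired` in the same order
// (other calls may be interleaved between the expected steps).
function containsInOrder(fired: string[], expected: string[]): boolean {
  let i = 0;
  for (const call of fired) {
    if (call === expected[i]) i++;          // matched the next expected step
    if (i === expected.length) return true; // every step seen, in order
  }
  return false;
}

// Calls captured while a test video played, in firing order:
const fired = ["video.start", "video.heartbeat", "video.heartbeat", "video.complete"];
containsInOrder(fired, ["video.start", "video.heartbeat", "video.complete"]); // true
```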

Slide 16:

Another super helpful tool on the ObservePoint side as we were going through this process was the SDR creator. SDR stands for solution design reference. What this does, through ObservePoint Labs, is let you set up an API key for your Adobe implementation and your API key on the ObservePoint side. You can utilize that to bring in all of your eVars, all of your props, and all of your events, and name each of them by suite, along with which processing rules are populating the data in each one of those variables.

This helped out in numerous ways, but the main two were: one, it allowed us to make sure all of the changes we had wanted to make and set out to do within this process were documented and working properly, and two, it gave us a quick, easy way to check that our processing rules were reporting into our eVars, props, and events. Oftentimes you set up processing rules and have to go into the data to understand whether or not a rule is giving you what you wanted within the system, but here we were able to quickly download an Excel sheet that showed us which processing rule was utilized, so we could make sure the data was processing and populating properly.

The final added benefit here is that once you have this document from ObservePoint, you can save it as an Excel sheet. We utilize that Excel sheet as our documentation moving forward, to allow our stakeholders to see all of our work and all of the things we had set up, and to make sure that going forward, if someone leaves the company, all this information is documented properly.

Slide 17:

That’s how we implemented our solution. Just a quick recap: we set out to understand how to get to a place where all of our products were reporting in a standardized way amid an explosion of new products in the media landscape. The way we went about solving that problem and getting standardized was, first, auditing all of our eVars, props, and events across all of our suites.

Second, developing a single, standard reporting layout that all of our engineering teams could utilize. Third, creating a data dictionary with lookup values to make sure values weren’t being hard-coded by engineers and left unstandardized across platforms. Then finally, we implemented, we UAT’d, and we QA’d. And sometimes we feel like Artie, the strongest man in the world.

Slide 18:

Thanks so much for your time. If you guys have any questions, please let me know.
