

October 23, 2018

Escape Manual Purgatory: Dive into WebAssurance

By Chris O’Neill of ObservePoint

Slide 1:

Thanks Brian! I hope everyone’s enjoying the conference so far. I’m just going to give a quick product demonstration of ObservePoint. So here we go.

Slide 2:

So with ObservePoint, one of the things we’ve noticed is that a lot of our clients have a hard time QAing their marketing analytics data. Many of our clients have a static tagging plan. It might live in Excel, in one of their employees’ heads, or some combination of both. This plan outlines what tags should be where, what variables should be firing, and what values they should be collecting across their website.

After a release, or any type of code change, analysts are often left with the task of validating that those tags are still working correctly. This process can be very manual. A lot of analysts will use a tag debugger, sometimes they’re in the network tab, or they’ve come up with a solution of their own using some combination of Charles, Selenium, or tools like that. The problem with this is it’s very tough to be consistent. These analysts already have a full workload with reporting and gathering insights from the data. They just don’t have enough time to be consistent every time there’s a code release.
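
To make that manual approach concrete, here is a minimal sketch of the kind of do-it-yourself check an analyst might script with Selenium, assuming Chrome with its performance log enabled. The page URL and the list of analytics domains are placeholders, not anything from the demo.

import json
from selenium import webdriver

ANALYTICS_DOMAINS = ("google-analytics.com", "omtrdc.net")  # placeholder list

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
# Enable Chrome's performance log so network requests are visible.
options.set_capability("goog:loggingPrefs", {"performance": "ALL"})

driver = webdriver.Chrome(options=options)
driver.get("https://www.example.com/")  # placeholder page to check

# Each performance-log entry wraps a DevTools event as a JSON string.
for entry in driver.get_log("performance"):
    event = json.loads(entry["message"])["message"]
    if event["method"] == "Network.requestWillBeSent":
        url = event["params"]["request"]["url"]
        if any(domain in url for domain in ANALYTICS_DOMAINS):
            print(url)  # an analytics beacon fired on page load

driver.quit()

Scripts like this work for a page or two, but someone has to remember to run them after every release and scale them across thousands of pages, which is exactly the consistency problem just described.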


The other large problem they run into is that it’s very difficult to be comprehensive. They have to resort to a strategy of spot checking, or possibly pull in resources from other departments, to validate the data on the thousands of pages and several journeys they have as they go through the tags on their site.

Slide 3:

What ObservePoint is here to do is solve this problem. We’re here to automate this process and make it more comprehensive, so that when analysts, directors, or managers are presenting these reports to the rest of the company to provide insights, they can come into those meetings with confidence, knowing that the data is correct. That builds more trust within their organization and pushes data-driven decisions throughout the company.

Slide 4:

With ObservePoint, the first step of the process is to take that static tagging plan, make it fully digital, and make it a living document that lives inside of ObservePoint. Then, as we’re automatically scanning your thousands of pages and going through all of your different flows, checkout processes, and form fills, we can validate that dataset against the digital tagging plan automatically and alert the analysts, or whoever is in charge of QA, when something is not firing correctly. This allows you to consistently schedule these scans, so that whenever there’s a code change, a code release, or an ad hoc problem, the scans can be kicked off whenever they’re needed. The other point that becomes very powerful when using ObservePoint is that it becomes very easy to be comprehensive. ObservePoint can crawl thousands of pages very quickly, and we can go through all of your different paths, checkout flows, and form fills in a very simple, automated, easy manner.
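
As a rough illustration of the idea (not ObservePoint’s implementation), a digital tagging plan can be thought of as structured expectations that each scan’s results get checked against. Here is a minimal sketch with made-up pages, tags, and variables:

# A toy model of checking scan results against a digital tagging plan.
# Pages, tags, and variables are all made up for illustration.
tagging_plan = {
    "/products": {"Adobe Analytics": {"pageName", "channel"}},
    "/checkout": {
        "Adobe Analytics": {"pageName", "events"},
        "Google Analytics": {"page_location"},
    },
}

scan_results = {  # what an automated scan observed on each page
    "/products": {"Adobe Analytics": {"pageName"}},  # channel missing
    "/checkout": {
        "Adobe Analytics": {"pageName", "events"},
        "Google Analytics": {"page_location"},
    },
}

def validate(plan, results):
    """Yield one alert per missing tag or missing variable."""
    for page, expected_tags in plan.items():
        observed_tags = results.get(page, {})
        for tag, expected_vars in expected_tags.items():
            if tag not in observed_tags:
                yield f"{page}: {tag} did not fire"
                continue
            for var in sorted(expected_vars - observed_tags[tag]):
                yield f"{page}: {tag} variable '{var}' was not set"

for alert in validate(tagging_plan, scan_results):
    print(alert)  # e.g. "/products: Adobe Analytics variable 'channel' was not set"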

Demonstration:

So now let’s go ahead and hop into the product and take a look around. With ObservePoint, we do two things:

First, we automatically collect all the data you need to look at to QA your analytics tags. We automatically collect all the tags across your properties, along with all the variables and values, and we do that in two different ways. The first is what we call an audit. Essentially, this is a spider through your entire site, where we execute the page load events on every single page. This is a large scan that can cover thousands and thousands of pages; we’ll collect all the data and show it to you in a very easy manner. The second way we collect your data is through what we call web journeys. These are targeted flows, so a good example of a web journey would be a checkout flow or a form fill. Basically, with web journeys we can execute click events, or any other type of event you would like to be triggered, to validate your marketing pixels across your site.

The second part of ObservePoint that we automate is the validation of this dataset, and we do that through what we call rules. We’ll get into rules in a little bit, but just know that as we look at an example audit and an example journey, we can automatically validate this entire dataset so that you don’t have to constantly log in to the ObservePoint platform or comb through the data. You can simply receive alerts when something comes back outside of the parameters you set.

So now we’ll look into a sample 500-page audit, just to give you an idea of the type of data we collect. This is what we call a tag summary. These are all the tags we collected across the 500 pages we crawled; you’ll notice there are quite a few here. We can give you very high-level information on this page. We can show you how many pages were tagged with a certain tag, and how many of the 500 pages we scanned were missing that particular marketing pixel. We can show you tag versions and how many accounts. We can even show you how many variables each tag was collecting across that entire 500-page scan.

We can drill in a little deeper and go into the variable summary, where we can show you all the variables and all the values that were collected across that 500-page scan. You’ll notice here all the eVars, and all the props that were executed on the page load events of that crawl. We can dig in and see that there were three unique values for prop 31: prop 31 was set on 38 pages of that scan, and it was not set on 263.
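
A variable summary like this is, conceptually, just an aggregation over per-page scan data. Here is a toy sketch of that idea with invented pages and variables, not ObservePoint’s actual data model:

from collections import defaultdict

# Per-page scan output: {page_url: {variable: value}} -- invented data.
pages = {
    "/home":    {"prop31": "en-us", "eVar5": "logged-in"},
    "/pricing": {"prop31": "en-us"},
    "/fr/home": {"prop31": "fr-fr"},
    "/about":   {},  # prop31 not set here
}

unique_values = defaultdict(set)  # variable -> set of values seen
pages_set = defaultdict(int)      # variable -> count of pages where it was set

for url, variables in pages.items():
    for var, value in variables.items():
        unique_values[var].add(value)
        pages_set[var] += 1

for var in sorted(unique_values):
    print(f"{var}: {len(unique_values[var])} unique values, "
          f"set on {pages_set[var]} pages, "
          f"not set on {len(pages) - pages_set[var]}")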


As we drill into this report, we can see the three unique values that were collected when prop 31 fired.

And then we can drill in even further and see the URLs where that prop collected data.

This is the page-level view, where we can show you additional information. We can show you cookie information, the full request log, and even the console log as well. And then we can tell you information about the page: how we got to the page, the number of tags on the page, whether there was a redirect, things like that.

Now this data, just in this report, is probably not that valuable on its own; but if you add the validation piece on top of it, and you’re validating this dataset as it comes in, you’re gonna be able to QA a large amount of data in a very short amount of time.

If there are rules that are violated, they’re shown very simply right here. These are some dummy rules we set up, but 279 of the pages we crawled had a failure. The rules that failed are named “PageName & Channel is Set” and “Global Variable Rules”. You can see that some pages had rules that were not applied, and here are the pages that passed. As we drill into the rule that failed, you can see which condition of the rule failed, and then we can even get to the list of URLs where PageName was not set. We can then send this off to IT or Dev, and they can fix the problem very quickly.

Now, this is just to give you an idea of the type of data we can collect in an audit and how it works; but in the hands of someone who is familiar with your tagging plan, and with what’s supposed to be firing where, this can be very powerful.

So now let’s go ahead and jump into the second scan type: the journey.

A journey is a targeted crawl, a targeted path through your site, so each one of these boxes is a step performed on your site. A step could be as simple as a nav-to: we just go to the page and show all the tags that fired, and again we can show you all the variables and values collected from our visit. But it could also be something like “click on the for-consumer button.” You can see here in the action tab that it tells us what we did, and then we can see all the tags generated from that action. So we can click into Google Analytics here and see the variables and the values that were collected.
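
To give a feel for what a journey step amounts to, here is a hedged sketch of one scripted by hand with Selenium: perform an action, then inspect the requests it generated. The selector, URL, and analytics domain are placeholders; in ObservePoint, journeys are configured in the product rather than coded like this.

import json
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.set_capability("goog:loggingPrefs", {"performance": "ALL"})
driver = webdriver.Chrome(options=options)

driver.get("https://www.example.com/")  # step 1: a simple nav-to
driver.get_log("performance")           # drain the page-load events

driver.find_element(By.ID, "for-consumer").click()  # step 2: click action
time.sleep(2)  # crude wait for the click's beacons to go out

# Only requests logged since the drain -- i.e. caused by the click -- remain.
for entry in driver.get_log("performance"):
    event = json.loads(entry["message"])["message"]
    if event["method"] == "Network.requestWillBeSent":
        url = event["params"]["request"]["url"]
        if "google-analytics.com" in url:
            print("fired by the click:", url)

driver.quit()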


And then, if we want to validate this dataset on the fly, we can quickly click the rules button right here, select which variables we care about and the values we expect to show up the next time we run this scan or the next time we have a release, and simply click the “Add Rule” button. Now, all of our rules are based on an “if-then” statement, so maybe you’re thinking to yourself, “Well yeah, I do want to see this variable on this step, but the value might be different next time.” Maybe it’s a user ID number, a date, or a timestamp. Or maybe there’s just a set of different values that could be returned. Well, we can go in and customize these rules here and change them to whatever we need. If we need to back the scope out, we can say, “You know, maybe I care that Variable T fires, and I’m not sure what the value will be, but I just care that it collects a value.” We can do that, or we can even use regex. So we can say, “Maybe Variable UL is not just en-us. It could be fr-en. Or maybe even de.” Now, when we save this rule, the next time the scan reaches that step, it will check for these three values on this specific tag, this account, and this variable, and if it does not return one of these values, it will send an alert. Then we can go in and quickly take a look at what the problem was.
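
Conceptually, an “if-then” rule boils down to: if this step and tag match, then the variable’s value must match a pattern. Here is a minimal sketch of that check using the same en-us / fr-en / de example; the rule and data structures are invented for illustration:

import re

# One invented "if-then" rule: IF we are on this step and this tag fires,
# THEN variable "ul" must match one of the three expected locales.
rule = {
    "step": "click-for-consumer",
    "tag": "Google Analytics",
    "variable": "ul",
    "pattern": r"^(en-us|fr-en|de)$",
}

observed = {  # invented scan output for that step
    "step": "click-for-consumer",
    "tag": "Google Analytics",
    "variables": {"ul": "fr-en", "t": "pageview"},
}

def check(rule, observed):
    """Return None on pass, or an alert message on failure."""
    # "If" part: does the rule apply to this step and tag at all?
    if (observed["step"], observed["tag"]) != (rule["step"], rule["tag"]):
        return None  # rule not applied
    # "Then" part: the variable must be set and must match the pattern.
    value = observed["variables"].get(rule["variable"])
    if value is None:
        return f"{rule['variable']} was not set"
    if not re.match(rule["pattern"], value):
        return f"{rule['variable']}={value!r} did not match {rule['pattern']}"
    return None

print(check(rule, observed) or "rule passed")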


The rules inside a journey look very similar: you’ll get a list of all the rules that failed, all the rules that weren’t applied, and all the rules that passed.

So now that we’ve given you a quick overview of what ObservePoint does, as far as data collection and data validation, let’s talk about what that looks like inside of your workflow.

Typically, when your Dev or IT team pushes a release, you’ll want to kick off all of your scans in that Dev or QA environment with ObservePoint, so we can quickly validate all of the data and make sure it’s working appropriately. If it’s not, we catch it at the QA level and fix it there. Then, once it’s been pushed to prod, we can go ahead and scan again, just to verify that nothing changed from QA to prod.
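
In practice, “kick off scans on release” usually means adding a step to the release pipeline. The sketch below shows the general shape of such a hook, posting to a placeholder endpoint; the URL, token, and scan IDs are assumptions for illustration, not ObservePoint’s actual API.

import os
import requests

# Placeholder endpoint and credentials -- not ObservePoint's actual API.
SCAN_TRIGGER_URL = "https://scans.example.com/v1/scans/{scan_id}/run"
API_TOKEN = os.environ["SCAN_API_TOKEN"]  # injected by the CI system

def trigger_scans(scan_ids, environment):
    """Kick off each configured scan against the given environment."""
    for scan_id in scan_ids:
        resp = requests.post(
            SCAN_TRIGGER_URL.format(scan_id=scan_id),
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"environment": environment},  # e.g. "qa" or "prod"
            timeout=30,
        )
        resp.raise_for_status()
        print(f"scan {scan_id} started against {environment}")

if __name__ == "__main__":
    # Run as a post-deploy step, e.g. once against QA and again after
    # the release is promoted to prod.
    trigger_scans(["500-page-audit", "checkout-journey"],
                  os.environ.get("DEPLOY_ENV", "qa"))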

Slide 6:

So, to return to our original slides, we’ll back it up. With ObservePoint, the idea is that we want your tagging plan to be digital: a living document in the cloud. We’re gonna take that logic and apply it to the automated scans of your site. We’re gonna consistently schedule those scans with your releases or any other type of code-change event. And it’s gonna be much more comprehensive and automated.

That’s all I got for now. Brian, I’ll go ahead and pass it back to you.
