Chris Baird - Introducing the Tag Governance Framework

October 24, 2018

Introducing the Tag Governance Framework

By Chris Baird of ObservePoint

Slide 1:

Alright, well, thank you, Brian. Again, my name is Chris Baird; I run the marketing group here at ObservePoint, and we are excited to be here. We look forward to these days, and it's an exciting time for us to bring all these great subject matter experts, really industry leaders in the analytics space, together to talk about such important things. Today I'm excited to talk to you about what we call the Tag Governance Framework.

Slide 2:

To help set the stage a little bit, I wanted to review the complexity of the digital marketing technology landscape. We have a lot of different tools out there, some of which are free and others which are fairly expensive. For the most part, the majority of these are implemented using a tag. I think all of us are very familiar with tags and the challenges around them. But to help paint the picture a little bit: we work with very large enterprises that spend a substantial amount of money on what we call premium web technologies. These technologies are deployed using a tag across multiple websites, apps, and devices. As we know, these digital channels are under constant development, and there are a lot of challenges that go along with that. Looking across the millions of different users and visitors that come in, they're using multiple different browsers on different devices. The management of these technologies oftentimes spans multiple teams across multiple offices or even continents. So right now, the process of checking the deployment of these tags is very manual, very inconsistent, and really isn't comprehensive.

 

Slide 3:

So, what we want to talk about today is this idea of Tag Governance. Tag governance is something that can be complicated or overwhelming, but we've been working very closely to define a process, with corresponding tools and best practices, whether internal to ObservePoint or external, working with other partners and technologies, to help you accurately validate the deployment of these technologies across all these different channels. And as the tech has grown, so has our offering and our vision of how we actually see this coming to pass.

Slide 4:

The idea of a tag governance framework is a process that outlines all of these steps. I'm excited to walk through each of these six phases, as we call them. And we call them phases because they are not sequential by nature. I think sometimes we look at this as something that goes step 1, 2, 3, 4, 5, 6. But in a lot of cases, what we've identified and what we've been able to learn is that these are really going on simultaneously, all the time. So, we've worked very closely over the past 12 months with some very large enterprise accounts that are real pioneers, very advanced in their data collection and in their analytics implementations. And we've been able to identify key wins and the different processes they've used to coordinate a proper deployment of these technologies.

Slide 5:

Before we jump into each of these different phases, I want to talk about the why a little bit. Why are we doing this? We've really distilled it down to six benefits that we truly feel you and your team will walk away with.

First is really just developing advanced trust in your data: truly believing that the data being collected is accurate.

Parlaying that into the next one, which is this idea of confident decision making: the ability to make really confident decisions. I think we've all been there, where we've seen a report and questioned the integrity of the data, questioned whether the numbers are really telling us what they claim to be telling us.

The third one is a big one: the idea of becoming more efficient. Leveraging automation, machines, and technology to do what really would be impossible for a team of analysts: QAing all the different environments across browsers and devices.

The next goes beyond just the technologies you're investing in to the internal resources you have supporting those technologies: getting a more powerful ROI on your marketing tech, on the teams actually using it, and on the data they rely on.

And then of course having robust data protection and ensuring customer privacy. Not just because of GDPR regulations; we're now seeing specific states enact privacy laws that we need to make sure our teams are compliant with.

And then the end result is a big one, and that's making sure we're leveraging that data to improve the user experience. Delivering what visitors are looking for faster or easier. Providing a better experience for each of the visitors to our web properties.

Slide 6:

So, jumping into the first phase, PLAN. It really does start here: making sure that you have a tagging plan and that you're keeping it updated. One thing that we actually did a few months back is run a survey, where we found that just under half of the respondents did not have any sort of documented tagging plan for their analytics. To us this is very important. The advanced enterprises that are truly able to measure and keep up with their implementations, and the accuracy of those implementations, are able to definitively define specific questions and then map them back to specific technologies.

Again, mapping specific variables and props back to the business questions you're trying to answer, and to business requirements and KPIs. So, we're excited to walk through what we have planned and what we've built to help you do this, and we're excited to walk through that later in the day.

But really, in the same way you wouldn't build a house without a blueprint, you really shouldn't be implementing any sort of digital technology without first answering specific questions about what you're looking to collect and how you're trying to do it.
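As a concrete illustration of the blueprint idea, a tagging plan can be as simple as shared, structured data that maps each business question to the KPI, technology, and variable that answers it. The entries below (the questions, the eVar/prop names, and the helper function) are purely hypothetical, a minimal sketch of what such a plan might look like:

```python
# Hypothetical sketch of a minimal, shareable tagging plan. Each entry maps
# a business question to the KPI and the analytics variable that answers it.
TAGGING_PLAN = [
    {
        "question": "Which campaigns drive checkout starts?",
        "kpi": "checkout_starts",
        "technology": "Adobe Analytics",
        "variable": "eVar5",
    },
    {
        "question": "What share of visits are returning customers?",
        "kpi": "returning_visit_rate",
        "technology": "Adobe Analytics",
        "variable": "prop12",
    },
]

def variables_for_technology(plan, technology):
    """List every variable a given technology must collect under the plan."""
    return [row["variable"] for row in plan if row["technology"] == technology]

print(variables_for_technology(TAGGING_PLAN, "Adobe Analytics"))
# → ['eVar5', 'prop12']
```

Because the plan is plain data rather than a stale spreadsheet, any team can query it, and it can live somewhere dynamic and accessible to everyone.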

Slide 7:

Some of the challenges here are employee turnover. Something that we've picked up on is this idea that one person owns the strategy and the tagging plan, and once they leave, it's either lost on their machine or sitting in a Google Doc somewhere. That's a challenge we've noticed in our space.

Also, this idea of siloed teams: different teams working in different offices, really not able to communicate or share a dynamic document.

Also, technology limitations. Up until recently there hasn't really been a shareable document. A lot of companies don't allow Google Docs for a tagging plan, and sending Excel spreadsheets back and forth isn't something a lot of teams do.

Slide 8:

So, some best practices. Like I mentioned, in order to keep pace with the business in deploying analytics, it really is imperative that a tagging plan is dynamic, accessible to all teams, accurate, and up to date. Those are some key things that we work on with some of our customers.

Slide 9:

Some of the outcomes. Obviously, implementing and keeping track of a tagging plan provides clarity. It really ensures that the team is consistently releasing the correct tags on the right pages, knowing exactly what your goals are from the outset. And it reduces the time spent correcting problems, because you know what the initial standard was when you first defined the plan. This results in fewer mistakes in your implementation and higher-quality, up-to-date data for your decisions on an ongoing basis.

Slide 10:

Moving on now to the COMPLY phase. This is basically the idea of certifying that implementations meet the standards of compliance, in three specific areas.
1. Legal requirements. Anything pertaining to GDPR and any other privacy laws in your region.
2. Internal requirements. Like we talked about inside the tagging plan: how the stakeholders inside your marketing organization define what is being tagged and by which technologies, mapping back again to these specific variables and props.
3. Vendor requirements. How specifically the technology is supposed to sit, and what it's supposed to look like inside the source code on each page.

Slide 11:

Some of the challenges around these areas are just understanding which technologies and which tags are deployed across all of your different digital channels. A lot of enterprises are surprised to discover piggybacking tags and third-party cookies that were set without their knowledge. Also, this idea of ensuring consistency across the websites and apps from which you're collecting: making sure that if you have a technology deployed on your home page, it's also sitting on the checkout page. Things like that.

Slide 12:

Best practices. So really, it's challenging to keep your eye on compliance, but it's important. Obviously, understanding the changing landscape of privacy and anything GDPR-related is paramount to making sure that you're protecting your customer data. Also, make sure that your development team is aware of anything that changes inside your tagging plan, and that they are being compliant with the internally defined business requirements we talked about inside the tagging plan.

Slide 13:

So again, if you step back and say, "Well, why are we doing this?" It's really to make sure that we have an accurate data collection process. That we're actually collecting what we intend to collect, and we're not jeopardizing the integrity of our implementation and our data set by failing to comprehensively verify whether we're compliant across all of our pages. So again, this idea of collecting the right data on the right pages.

Also, being able to protect our customers from any sort of data breach. So, making sure that we're taking privacy laws seriously.

And then really this idea of proactively minimizing any costly infractions and just overall reducing the company’s risk.

Slide 14:

Jumping now to the DEPLOY phase. The deploy phase is really when you take these approved, compliant plans and put them into action. Most of you are probably using a tag manager like Adobe Launch or Tealium. This is where those plans really come to life.

Slide 15:

Some of the challenges here that we've noticed and that have been identified is this idea of weak communication. For some reason, when the initial stakeholders define the requirements and pass them off to the deployment team, there's not enough communication.

There are a lot of internal and external stakeholders. Some are possibly even sitting in different regions at different companies, for those using an agency to deploy.

Time zones can be an issue here.

Goals, business requirements, and definitions can get lost. And again, this is where incorporating a tagging plan and leveraging it throughout this process is paramount.

Slide 16:

And then, jumping to the best practices. One of the things that we've been able to really evangelize is this idea of catching any mistakes inside the development phase: using some deployment discipline and making sure you're only deploying what's outlined in the tagging plan.

Also, just this idea of not working in silos. We talked about it a minute ago, but whether it's a Slack channel or communication inside the tagging plan, it's important to make sure you're communicating what's actually being implemented.

Spot problems during the implementation.

Also, run small checks as you publish, to make sure that it actually looks the way it should inside the TMS. And really start to identify, define, and automate any sort of process inside the deployment phase as you go.
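To make those "small checks as you publish" concrete, here is a minimal sketch of the kind of comparison such a check performs: diffing the tags actually observed on a page against what the tagging plan expects. The tag names here are hypothetical placeholders:

```python
# Sketch: compare the tags observed on a published page against the tagging
# plan's expectations for that page. Tag names are illustrative only.

def check_page_tags(expected, observed):
    """Return tags missing from the page and tags not in the plan."""
    missing = sorted(set(expected) - set(observed))
    unexpected = sorted(set(observed) - set(expected))
    return {"missing": missing, "unexpected": unexpected}

# Example: the plan expects an analytics tag and a TMS loader on the home page.
result = check_page_tags(
    expected=["adobe-analytics", "tealium-utag"],
    observed=["adobe-analytics", "unknown-pixel"],
)
print(result)
# → {'missing': ['tealium-utag'], 'unexpected': ['unknown-pixel']}
```

A non-empty `unexpected` list is exactly the kind of piggybacking tag you want to catch before it reaches production.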

Slide 17:

Again, if we ask ourselves why, and what's important in the deploy phase, the outcome here is to make sure that we stay agile and don't slow down our development team. That we look for ways to communicate effectively with them, saving time on the back and forth. Also, this idea of reducing complexity: making sure that our tagging plan is visible to those deploying inside the tag managers is crucial.

Slide 18:

Now we can move on to the QA phase. And now we are really into the heart and the history of ObservePoint. And to be honest, the QA and deploy phases are really done simultaneously. This is the phase where we automate the testing of all websites, devices, and apps in pre-production environments.

Slide 19:

I'll go through that in just a second; but really, to identify some of the challenges here: scarce resources are an issue. Not having teams large enough to test all of the different channels. And even if you did have teams that large, it's obviously impossible to achieve the level of accuracy that machines can achieve when they comb through all that source code and all the different tags across all your different pages. There really are human limitations to doing the same thing that software can do.

Slide 20:

Talking about and outlining a few of the best practices. Obviously, one of the key things here is leveraging automation to catch any of these errors before pushing them to a production environment. So, really having your tagging plan and enforcing a checkpoint, an actual sign-off. In the same way that you'd sign off on creative designs or email copy being ready to send, a new webpage or campaign should get the same type of data collection thumbs-up. Having those checks and balances there is crucial, and really the only way to achieve this is by leveraging automation: defining your specific customer journeys or checkout process and checking, step by step, the analytics or any MarTech tag collection; defining audits with specific rules to certify that the data is being collected the way it should be.
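An "audit with specific rules" ultimately boils down to asserting expected variables and value formats against each captured analytics call. The sketch below is illustrative only; the rule schema and variable names (`pageName`, `currencyCode`) are assumptions, not any vendor's actual format:

```python
import re

def run_audit(rules, beacon):
    """Validate a captured analytics beacon against tagging-plan rules.

    Each rule names a variable, whether it is required, and an optional
    regex its value must fully match. Returns a list of failure messages.
    """
    failures = []
    for rule in rules:
        name = rule["variable"]
        value = beacon.get(name)
        if value is None:
            if rule.get("required"):
                failures.append(f"{name}: required but missing")
            continue
        pattern = rule.get("pattern")
        if pattern and not re.fullmatch(pattern, value):
            failures.append(f"{name}: value {value!r} does not match {pattern}")
    return failures

rules = [
    {"variable": "pageName", "required": True},
    {"variable": "currencyCode", "required": True, "pattern": r"[A-Z]{3}"},
]
print(run_audit(rules, {"pageName": "home", "currencyCode": "usd"}))
# → ["currencyCode: value 'usd' does not match [A-Z]{3}"]
```

Running rules like these automatically, per journey step, is what lets a machine certify in minutes what a team of analysts could never check by hand.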

Slide 21:

If you step back and say, "Why am I doing this?" You can't scale any sort of enterprise QA effort without automation. You can't scale it without leveraging the tools that exist today and really building them into your practice. So, the idea here is to speed up the process, scale the process, and become more efficient. Spend less time doing manual QA and more time analyzing your data and preparing any future launch or campaign. Be quicker to QA all of your browsers, all of your devices, all of the versions, adequately representing your entire visitor base in your QA process.

Slide 22:

So, now we move on to the VALIDATE phase. The validate phase is really all about testing your implementation immediately following any release. I should emphasize that there is some urgency here.

Slide 23:

Some of the challenges are really the same challenges that existed for us in the QA phase. Very similar. In addition to those, there's typically a lack of urgency in a post-release testing practice, mainly because there's a sense of accomplishment that the project is done now that it's live. So, this is actually where it's crucial to make sure that the live environment is the same as what you tested in the QA environment, from a data collection perspective. You obviously still have the scalability and efficiency issues, and there are still scarce resources, just as there would be for testing in a QA environment.

Slide 24:

Best practices here are really to take a step back, slow down, and compare each environment immediately after the release back up to the tagging plan. Here you should be using the same rules you identified and the same journeys you created to test in your pre-production environment; use those to test in your live environment as well, and compare the results to ensure consistency. So, before the release, it's important to have already built out a testing plan for how and what to test post-release.

Obviously, this is where you're finding and fixing any data collection errors as quickly as possible, and really relying on alerts to notify you of any data collection issues. This is also where you're going to test new technologies, as well as anything that is or could be affected by the changes. So, obviously, landing pages, any sort of dynamic web content: making sure that everything is being collected the way it should.
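One way to picture the pre-production versus live comparison is as a per-page diff of the tag inventories found in each environment. This is only a sketch under an assumed data shape (page path mapped to a set of tag names), not a description of any specific tool:

```python
def diff_environments(staging, production):
    """Compare per-page tag inventories between two environments.

    Inputs map page path -> set of tag names (illustrative shapes).
    Returns only the pages whose production tags differ from what was QA'd.
    """
    drift = {}
    for page in staging.keys() | production.keys():
        qa_tags = staging.get(page, set())
        live_tags = production.get(page, set())
        if qa_tags != live_tags:
            drift[page] = {
                "missing_in_prod": sorted(qa_tags - live_tags),
                "extra_in_prod": sorted(live_tags - qa_tags),
            }
    return drift

drift = diff_environments(
    staging={"/home": {"analytics", "tms"}, "/checkout": {"analytics"}},
    production={"/home": {"analytics"}, "/checkout": {"analytics"}},
)
print(drift)
# → {'/home': {'missing_in_prod': ['tms'], 'extra_in_prod': []}}
```

An empty result is your confirmation that the release collects exactly what was signed off in QA; anything else is the post-release urgency the slide is talking about.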

Slide 25:

As we step back and ask ourselves why this matters: finding critical errors before they become serious and impact data collection obviously makes you much more confident in the performance of your data with your new release. And it really sets you up long term for day-to-day accuracy.

Slide 26:

And now we have arrived at the final phase of the tag governance framework, and that is MONITOR. This is where you're keeping tabs on your tagging implementation in production. This is ongoing.

Slide 27:

Some of the challenges are similar to the past two phases: scarce resources, unmanageable growth, technology neglect. I don't think any of us have met a team that has more time and more resources than they need. Oftentimes, after pushing something into a live environment, your team has already moved on to the next phase of development, the next project, the next campaign. Also, as your site gets larger, the required time to test grows. And new technologies are often given testing priority, so some of our older technologies can be neglected.

Slide 28:

Some best practices here are this idea of continuous monitoring, continuous watchful eye, and trust building.

Slide 29:

This is my fun slide for the presentation. I'm not going to completely nerd out, but I will say that this idea of having a watchful eye, of "keeping an eye on your data collection," making sure your monitoring is finding anomalies and alerting you to any sort of data collection issues across any of your customer journeys and predefined audits, is key.
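As a toy example of what such a watchful eye does, the sketch below flags days whose tag-request volume deviates sharply from recent history using a simple z-score. Real monitoring tools use far more robust methods; the threshold and data here are assumptions for illustration only:

```python
from statistics import mean, stdev

def flag_anomalies(daily_counts, threshold=3.0):
    """Flag indices of days whose tag-request count deviates sharply
    from the preceding days, via a naive z-score. Sketch only; a real
    monitor would handle seasonality, trends, and small samples."""
    alerts = []
    for i in range(2, len(daily_counts)):
        history = daily_counts[:i]
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(daily_counts[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# A sudden drop in tag requests (e.g. a tag silently removed in a release)
# shows up as an anomaly on the last day.
print(flag_anomalies([100, 102, 98, 101, 5]))
# → [4]
```

The point is not the statistics; it's that an alert like this reaches you before a week of broken data collection does.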

Slide 30:

Again, if we step back, the takeaway here is this increased trust in your marketing and analytics over time. As you develop a clearly defined process for release validation and tag governance, ongoing monitoring helps build trust. It also minimizes errors that could affect future releases, so you can adjust your tagging plan to address them. And it just maintains the value of your implementation over time.

Slide 31:

So, we've come full circle and reached the end of talking about each part of this tag governance framework at a very high level. We specifically didn't want to dive in too deep or get too granular; we wanted to lay out a very general outline, to illustrate that this is an ongoing process of governance and efficiency.

So, what I’d like to do is take a quick step back and in 60 seconds walk through this and give a real-life example of maybe how you and your team would make this all come together.

Slide 32:

If we look at it step by step in a linear format: you assemble the stakeholders. Here you're gathering business requirements in the planning phase, mapping them back to specific variables, and converting requirements into technical documentation that is dynamic and shareable.

Then there's the comply phase, which really runs in parallel. You're determining the privacy standards and the internal and external compliance requirements.

And here you're then using those requirements to build a tagging plan and define any business rules that need to be followed. Next, you would give your development team access to your tagging plan so they can follow it and use it as they deploy.

Chances are your team is using something like Jenkins to push any changes into a staging environment. At that point, QA would run automated tests to find any reporting anomalies in any pre-production environment. We all do this; this is a very basic flow. But something unique for ObservePoint customers: you can leverage the ObservePoint API to create a ticket in your ticketing system and notify developers of any errors or anything they need to fix, and then issue a new audit to confirm the fix once it's done.
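The ticket-filing step might look something like the following sketch. The payload fields and webhook URL are hypothetical placeholders, not the actual ObservePoint or ticketing-system API; a real integration would map the audit results onto your own system's schema:

```python
import json
import urllib.request

def build_ticket(audit_name, page, failures):
    """Build a ticket payload from audit failures.

    Field names here are hypothetical; map them to whatever your
    ticketing system actually expects.
    """
    return {
        "summary": f"Tag audit '{audit_name}' failed on {page}",
        "description": "\n".join(failures),
        "labels": ["tag-governance", "auto-filed"],
    }

def file_ticket(webhook_url, payload):
    """POST the ticket to a ticketing webhook (URL is a placeholder)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

ticket = build_ticket("Staging Audit", "/checkout", ["eVar5: required but missing"])
```

Once the developer fixes the issue, re-running the same audit against staging is what closes the loop described above.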

If this all checks out, you push into production. Then re-apply, like I mentioned before, those same audits and journeys from QA to the production environment.

So after the validation phase, you'll want to set up ongoing monitoring, so that your data collection yields valuable insights you can use in your analysis, and so you can accurately inform your teams as they put together new marketing strategies and incorporate the data into their everyday decisions.

Again, accurate data. And this idea of accurate data is going to lead to better decisions and better analysis throughout your team.

So, in conclusion, this is an ongoing process. It is important to identify individuals on your team that own the different areas of this framework.

Slide 33:

It's crucial that assignments be made, and that individuals know their assignments and responsibilities throughout the framework. It is important to remember that these phases are happening simultaneously and continuously. This is an ongoing process of governance and efficiency that should really be the blueprint for how your team releases technology and validates that it is deployed correctly. If we step back and ask ourselves, ultimately, "Why are we doing this? What is the purpose?" It's really to help you get it right.

Slide 34:

Whether you're on a digital marketing team, an IT team, an implementation team, a QA team, or whether you're just an analyst who relies on the data, ultimately, the idea here is to get it right.

Slide 35:

And I hope that you learned something, and that you were inspired to make a change to your existing validation process. If you have any questions about our offering or the different solutions we have in place to help you and assist you along the way, please reach out; we'd be happy to walk you through that. With that said, I know we have some great sessions lined up for you today, with some amazing speakers who have spent a lot of time preparing their comments and remarks. We're excited to have all of you here with us, and with that, I will hand it back over to Brian. Thank you.
