It starts like this: data stakeholders get excited about garnering actionable data for informed decision-making. So they decide to accelerate their data collection, adding more complex configuration to their existing tools or adopting more technologies into their implementation.
But one day their implementation goes down. What happens then? Either the data stakeholders don’t realize the failure and make bad decisions (including automated decisions), or they’re left in a mad scramble trying to figure out what happened. All the while, precious data slips through their fingers.
Failing to implement a data governance program is dangerous. Best practice is to continually validate your implementation; that is the only way to make sure that not just one report, but every report, is correct.
Here are some specific tag auditing best practices you can apply to successfully verify your data is always right:
Building a Tag Auditing Culture
1. Build a culture of stewardship and accountability over tag implementation and auditing.
Somebody needs to be responsible for ensuring your analytics and marketing tags are continuously validated. This somebody is not the summer intern. Choose someone who:
- Has a strong technical background
- Understands the business value of analytics
- Has some seniority to implement processes
- Is data-oriented
2. Establish a reporting process for when adjustments need to be made.
Anything that can go wrong will go wrong. You can thank Murphy for that.
Make sure you have a protocol in place so that whoever notices a problem (often an analyst) can easily communicate it to the person who can fix it (usually a developer).
If data is not being sent correctly, then whoever has the power to make the change needs to know right away.
Learn more: ObservePoint API Use Cases
3. Create and maintain data governance documentation, often known as a solution design reference.
A solution design reference (SDR) documents the variables a company deploys on their website or app, along with basic explanations, formatting conventions, and expected values. An SDR is a blueprint for how all of your web and mobile technologies should be deployed, and helps keep everyone on the same page.
So if you don’t have an SDR, please make one.
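To make the idea concrete, an SDR can be kept in a machine-readable form so that collected values can be checked automatically against their documented formats. The variable names and patterns below are hypothetical examples, not a prescribed schema:

```python
import re

# A minimal SDR sketch: each entry names a variable, explains it, and
# gives a pattern its values are expected to match. All names and
# patterns here are illustrative assumptions.
SDR = {
    "pageName": {
        "description": "Friendly page identifier",
        "pattern": r"^[a-z0-9-]+(:[a-z0-9-]+)*$",  # e.g. "home:products"
    },
    "campaignCode": {
        "description": "Marketing campaign identifier",
        "pattern": r"^cmp-\d{6}$",                 # e.g. "cmp-202401"
    },
}

def validate(variable: str, value: str) -> bool:
    """Check a collected value against the SDR's expected format."""
    entry = SDR.get(variable)
    if entry is None:
        return False  # variable is not documented in the SDR
    return re.fullmatch(entry["pattern"], value) is not None

print(validate("pageName", "home:products"))  # True
print(validate("campaignCode", "CMP-1"))      # False (wrong format)
```

Keeping the SDR in a form like this means the documentation and the validation checks can never drift apart.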
Performing Periodic Audits
4. Perform daily audits on your top pages.
The frequency and size of your audits will depend on your objectives. Performing a daily audit on your top 1,000 pages will help you quickly identify problems where they matter most.
In addition, running a daily or weekly audit on your external campaigns will help verify your campaign codes are attributing accurately.
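A campaign audit of this kind boils down to checking that every landing URL carries a well-formed campaign code. As a sketch, assume a hypothetical convention where the code rides in a `cid` query parameter in the format `cmp-NNNNNN` (both the parameter name and format are made-up examples):

```python
import re
from urllib.parse import parse_qs, urlparse

# Hypothetical convention: campaign landing URLs carry a "cid" query
# parameter matching cmp-NNNNNN. Adjust to your own tracking scheme.
CAMPAIGN_PARAM = "cid"
CAMPAIGN_PATTERN = re.compile(r"^cmp-\d{6}$")

def campaign_code_ok(url: str) -> bool:
    """True if the URL carries exactly one well-formed campaign code."""
    codes = parse_qs(urlparse(url).query).get(CAMPAIGN_PARAM, [])
    return len(codes) == 1 and CAMPAIGN_PATTERN.fullmatch(codes[0]) is not None

print(campaign_code_ok("https://example.com/?cid=cmp-202401"))   # True
print(campaign_code_ok("https://example.com/?cid=spring-sale"))  # False
```

Running a check like this over your live campaign URLs on a daily or weekly schedule catches malformed or missing codes before they skew attribution.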
Learn more: How Often Should I Perform a Tag Audit?
5. Perform frequent partial audits (weekly or monthly) to identify systemic problems.
If your website is template-based, most of the problems that occur will be systemic instead of specific to individual pages. As a result, it’s not always necessary to perform a comprehensive audit of your site, which could be an extremely long process.
Learn more: Determining the Optimal Size for a Website Audit
Audits During Development
When you’re working on releasing a new website, make sure that you perform intensive audits during all phases of development and production.
6. Perform quality assurance audits during your development cycle.
During your development cycles, you want to verify that each technology is reporting as expected. The development cycle is a dynamic process, so things can break.
Establish a baseline, and compare your QA audits against that baseline.
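The baseline comparison can be as simple as recording which tags fired on each page in a known-good build, then diffing each QA crawl against that record. A minimal sketch, with made-up page paths and tag names:

```python
# Known-good snapshot: which tags fired on which pages (illustrative names).
baseline = {
    "/home":     {"analytics", "ad-pixel"},
    "/checkout": {"analytics", "conversion-pixel"},
}

# Results of a new QA crawl during the development cycle.
qa_run = {
    "/home":     {"analytics"},  # ad-pixel went missing
    "/checkout": {"analytics", "conversion-pixel"},
}

def diff_against_baseline(baseline, qa_run):
    """Report pages whose fired tags differ from the baseline."""
    problems = {}
    for page, expected in baseline.items():
        found = qa_run.get(page, set())
        missing, unexpected = expected - found, found - expected
        if missing or unexpected:
            problems[page] = {"missing": missing, "unexpected": unexpected}
    return problems

print(diff_against_baseline(baseline, qa_run))
# {'/home': {'missing': {'ad-pixel'}, 'unexpected': set()}}
```

Because the diff is against a baseline rather than an absolute spec, it flags regressions introduced by each development cycle without re-auditing everything by hand.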
7. Perform a comprehensive discovery audit before release.
Before deployment, make sure to perform a full audit to test for functionality. When scheduling this discovery audit, make sure to plan out sufficient time to make any necessary repairs before your scheduled release date.
If you don’t identify errors before deployment, you risk losing data for as long as you remain unaware of the problem, plus the amount of time required to fix it. An ounce of prevention goes a long way.
If you have an exceptionally large site, consider performing a 10,000-page audit. In our experience, a 10,000-page audit gives you as accurate a gauge of site performance as a 100,000-page one.
8. Frequently audit your site in the production environment with rules.
Audit your site immediately after release and regularly afterward. Changes occur over time, and a regular audit will ensure that you know about data discrepancies as they arise.
Instead of having to comb through reports every time you perform an audit, set up rules to validate against your network requests and alert you of any failures.
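Conceptually, each rule is an assertion about the tag requests captured on a page, and failures are surfaced as alerts instead of being buried in a report. A minimal sketch, with hypothetical hosts, parameters, and rule names:

```python
# Requests captured during a production audit (illustrative data).
captured_requests = [
    {"page": "/home", "host": "analytics.example.com",
     "params": {"pageName": "home"}},
    {"page": "/home", "host": "ads.example.net", "params": {}},
]

# Each rule is a name plus a predicate over the captured requests.
rules = [
    ("analytics tag fires on the page",
     lambda reqs: any(r["host"] == "analytics.example.com" for r in reqs)),
    ("every analytics request carries pageName",
     lambda reqs: all("pageName" in r["params"]
                      for r in reqs if r["host"] == "analytics.example.com")),
]

def run_rules(reqs, rules):
    """Evaluate all rules; announce and return the ones that failed."""
    failures = [name for name, check in rules if not check(reqs)]
    for name in failures:
        print(f"ALERT: rule failed: {name}")  # stand-in for a real alert channel
    return failures

print(run_rules(captured_requests, rules))  # [] — all rules pass here
```

With rules codified this way, each production audit either stays silent or tells you exactly which expectation broke.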
Applying Tag Auditing Best Practices Over Time
Collecting correct, compliant data is a journey, not a destination: a continuous process that applies to ongoing data collection, not just an individual report. By applying tag auditing best practices, you create a reliable dataset from which you can produce accurate reports, strengthening confidence in your implementation and supporting good decision-making.
To learn more about tag auditing best practices, check out What 57% of Data Scientists Hate About Their Job.
About the Author: Matthew Maddox