Automating QA Testing for Failproof Analytics & Marketing Data

March 26, 2020

Even though Adobe Summit has been cancelled, we still wanted to make sure you have the opportunity to get the helpful tips from our session, "Automating QA Testing for Failproof Analytics & Marketing Data."

Chris Mavromatis, Senior Systems Analyst at MasterCard, and Mike Maziarz, Product Manager for ObservePoint, have teamed up to show how to automate QA testing to:

  • Gain powerful insights into your data
  • Streamline migrations
  • Comply with privacy regulations
  • Prepare for bigger and better things in the future

Hi everyone, thanks for being here today! My name is Aunica Vilorio, and I'm the partner marketing specialist here at ObservePoint. Today's webinar is the on-demand version of our Adobe Summit session, "Automating QA Testing for Failproof Analytics and Marketing Data," with Chris Mavromatis, Senior Systems Analyst at Mastercard, and Mike Maziarz, Product Manager at ObservePoint.

Chris has been involved, through various technical and business-side roles, in leading most areas of digital marketing for almost 18 years. He started his career in digital at the Scottrade brokerage firm and is now at Mastercard heading up the digital auditing and governance function. His main areas of focus have been website development, leading teams through content management system migrations (Adobe Experience Manager among others), targeting and personalization, audience management, and most recently website auditing and governance. Prior to these roles, he owned a sports production company, Mavrocat Productions, Inc., that helped bring the sport of Strongman to the United States in 1998 and produced a show, "The Strongest Man Alive Contest," that was seen in 105 countries through Fox Sports International.

Mike Maziarz is a Utah native who loves golf, basketball, and data. Before coming to ObservePoint, Mike worked for technology giant Vivint Smarthome, where he was responsible for workforce management, retention, and business intelligence. Mike has been with ObservePoint for over five years, first as a data governance consultant and now as the product manager.

With that, we'll turn it over to Chris. 

Chris:

Hello and thank you for joining this webinar. I'm Chris Mavromatis from MasterCard, and for those that may not know much about my company, I would like to quickly talk about it.

We were founded in 1966 as Interbank Card Association and became MasterCard in 1979. Our global headquarters is in Purchase, New York. We are one of the top 20 largest companies in the world. We have office locations in more than 50 cities around the world with over 17,000 employees. We operate in over 200 countries and, believe it or not, and I always forget which one it is, we are in either the Arctic or Antarctica.

We work with and for consumers, merchants, issuers, governments, and businesses. We host and maintain over 300 websites in over 5,000 domains globally. Our IT efforts are headquartered in St Louis, which is where I am based out of with technology offices all across the globe. Our technology efforts are performed internally and in concert with agencies. And we're instrumental in enabling global commerce and making payments happen around the clock and around the world. 

So what are some of our main challenges? One is the sheer number of our websites. Our global regions are also semi-autonomous, so we have unique country sites spread throughout the world. Then there are our third-party and agency efforts and how they interact with our internal efforts; the conversion and updating of legacy sites to our latest standards; the challenges of GDPR and other privacy adherence initiatives; ensuring adherence to our approved list of digital media tags; and the need for deeper overall insight into digital governance adherence.

So what were the initial benefits of our automated testing efforts? 

We created over 270 website audits that provide powerful insight into a multitude of digital aspects of each individual site, and these run on a regularly scheduled basis.

We gained clarity into whether we had full, approved analytics, audience management, and targeting capabilities coverage on every audited site.

We created custom tags, and these are very useful to help with GDPR compliance, detection of code for a social sharing tool we ultimately decided against utilizing, our approved digital media tagging, and our conversion from one privacy platform suite to another. We maintain a master list of all websites, which we use to know which websites we have audited and which we still need to audit. We create regularly scheduled audits based off of that list, grouped into folders based off of specific criteria such as geography, product, or initiative. This allows us to ensure full audit coverage for specific groupings. The groupings also make it easier when we are searching for specific sites, or when we are seeing if there are different audit results for a specific product line, for example. We analyze and investigate the results of an individual or group of audits once they have completed. We work to mitigate any issues we might detect and run the audit again, or, if it isn't an emergency, we wait for the next scheduled run of that audit to determine if mitigation was successful. When the next scheduled run of an audit or group of audits is complete, we repeat the above steps.

Here you can see an example of one of those audit result pages, specific to a privacy platform we were using. We run the audit again after our mitigation efforts and see if the results have improved when the next scheduled run of an audit or group of audits is complete. We repeat these steps for as long as is necessary.

Some examples of data points that we look at are the presence or absence of specific digital media tags, both those approved within the company and those unapproved; the specific versions utilized of our analytics, audience management, and targeting tools; the presence of duplicate or multiple tags; and the Page Details page within the tool, which includes cookies, requests, console logs, and tag hierarchies. In addition, we look at our page load times.
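One of those checks, flagging duplicate or multiple tags on a page, can be sketched in a few lines. This is a hypothetical illustration, not ObservePoint's implementation; the vendor patterns and request URLs below are made up.

```javascript
// Hypothetical vendor patterns; a real audit would match the actual
// collection endpoints from an approved tag list.
const TAG_PATTERNS = {
  "Adobe Analytics": /\/b\/ss\//,
  "Google Analytics": /google-analytics\.com\/collect/,
};

// Count how many times each known vendor fires in a page's network
// requests and return the vendors that fired more than once.
function findDuplicateTags(requestUrls) {
  const counts = {};
  for (const url of requestUrls) {
    for (const [vendor, pattern] of Object.entries(TAG_PATTERNS)) {
      if (pattern.test(url)) counts[vendor] = (counts[vendor] || 0) + 1;
    }
  }
  return Object.keys(counts).filter((vendor) => counts[vendor] > 1);
}
```

A page whose request log contains two Adobe Analytics beacons would be flagged for that vendor, while single-firing tags pass.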

So what has been the impact to us so far? 

We have built automated spreadsheet reports that are sent on a weekly basis to our digital web auditing, privacy, and legal teams. These custom-built reports detail specific variables that are of interest to the teams that receive them. With these reports being sent by email, teams are not required to log into the auditing tool for the desired data. As a new audit is created, its results are automatically added to these reports. Some examples of the content of these reports are approved or unapproved tags, GDPR compliance criteria successfully being met for a site, and specific code snippets, both approved and unapproved, appearing on specific sites. Some media tags were being introduced onto our sites through piggybacking from our external consoles; utilizing the tag hierarchy function of an audit, shown to the right, and performing some deeper research, we found that remediation efforts needed to be taken through those consoles. We also found we didn't have full analytics coverage on every page of every site as we expected we did, and a few more sites than expected were using an alternate, older analytics tool.

So what is next? 

We are thinking of creating a stamp of approval custom tag for our sites that's run on a quarterly basis. This tag would have a multitude of variables that it would check against, such as full analytics and audience management coverage, full GDPR compliance, only approved tags being rendered, and so on, and it would report back both within the audits and through email to the appropriate parties. While a site can obviously change mid-quarter, a site having fully complied with the variables within this custom tag helps to give a high level of confidence in its compliance with our overall standards.
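The stamp-of-approval idea could be sketched as a simple aggregation of pass/fail checks. The check names below are illustrative assumptions, not MasterCard's actual variable list.

```javascript
// Hedged sketch: a site earns the quarterly stamp only if every
// individual compliance check passed. Check names are hypothetical.
const REQUIRED_CHECKS = [
  "fullAnalyticsCoverage",
  "audienceManagementCoverage",
  "gdprCompliant",
  "onlyApprovedTags",
];

// Returns the overall verdict plus which checks failed, so the result
// can be reported back by email to the appropriate parties.
function stampOfApproval(siteResults) {
  const failures = REQUIRED_CHECKS.filter((check) => !siteResults[check]);
  return { approved: failures.length === 0, failures };
}
```

A site with one unapproved tag on it would fail only the `onlyApprovedTags` check and be denied the stamp until remediated.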

And now Mike is going to take us through another use case. 

Mike Maziarz:

Awesome. Thanks Chris. Really appreciate you walking through these use cases. It's always awesome to see our customers, you know, leveraging ObservePoint in so many different ways to find value and to help in so many different aspects from privacy to tagging health. And so really appreciate those insights today and seeing that. 

All right, today I'm going to talk about migrations. This is another use case that we help our customers out with a lot, whether it's migrating from one analytics technology to another or from one ad platform to another. Today I'm actually going to speak about moving from one tag management system to another; in this case, a newer version of that tag management system. What that's going to entail is moving from Adobe DTM to Adobe Launch. And that is something that's coming up really quickly, with a deadline attached: Adobe is sunsetting DTM by the end of the year. So this is something that will hopefully be very valuable to you and make the process not so overwhelming.

So we have broken this down into a four-phase approach, as there are a lot of moving parts, and this should hopefully make it easier. We're going to talk about copying and recreating all tools, rules, and data elements; updating any deprecated methods you might have; making sure no errors are thrown as you make this move; and making sure everything still works correctly with no gaps in the data. There are many other things as well; these are just a few that we'll highlight today. This will also help you break down exactly what you need to do and when to do it, to ensure that all of this goes as smoothly as possible while you're hitting a moving target. So the first phase we're going to look at here today is catalog.

At a minimum, a successful tag management migration means maintaining the status quo of analytics tracking while moving from your legacy platform to your new tag management solution. And because the main goal is to maintain the status quo, you'll need to establish a baseline, a snapshot of the current implementation, to check against as you create the new implementation. That baseline is this catalog. You can create the catalog via manual effort or via automation. If you decide to go the manual way, you will want to factor in extra time for building out the reference documentation that's going to be your baseline, as it will take time to do this. You may not be able to cover all scenarios, just because you only have so much time, and you need to realize there may be some errors as well, just because you're relying on human ability and time to do so.

The other option is via automation. You could use a tool like ObservePoint, or a solution like ObservePoint web audits, that automatically scans your site and creates a record of all the technologies, their variables, and their values, giving you a good solid foundation of what you currently have on your site. This process is much faster, exponentially faster, than the manual alternative, and it'll return current, up-to-date details of what is implemented on your site. You may try to rely on your current documentation, but most likely that's out of date, so you'll want to use ObservePoint to give you that fresh look at how your site currently sits.
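As a rough idea of what cataloging produces, a baseline can be thought of as a map from each page to the tag technologies found on it. The sketch below only scrapes external script URLs out of raw HTML; a real audit inspects live network traffic and tag variables, so treat this as a simplified stand-in with made-up URLs.

```javascript
// Hedged sketch: build a baseline catalog of page -> external script URLs
// by scanning raw HTML with a regex. Inline scripts (no src) are ignored.
function catalogScripts(pagesHtml) {
  const catalog = {};
  for (const [pageUrl, html] of Object.entries(pagesHtml)) {
    const scriptSrc = /<script[^>]*\ssrc=["']([^"']+)["']/gi;
    const found = [];
    let match;
    while ((match = scriptSrc.exec(html)) !== null) found.push(match[1]);
    catalog[pageUrl] = found;
  }
  return catalog;
}
```

Running this across a crawl of your site before the migration, and again after, gives you two catalogs you can diff to spot missing or added technologies.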

And then the real benefit here is that it's going to be much more accurate than doing it the manual way. It's faster and accurate, and you won't have that human error come into play, as our proprietary auditing tools do this for you with machines and computing power.

So at the very least, the main goal of every migration should be to maintain that consistency and ensure there are no gaps in the data between the old and the new. However, if you have the time and the resources, we recommend using this opportunity to look at implementing some new solutions or new variables that might collect insights you didn't have previously. So the first step is to catalog your current information, and then use that as the baseline when creating a new property in Launch.

The next phase is strategizing. Now that we've determined we need a baseline, I want to go over some of the tools. I talked about the manual option versus the automated option. For those that are going to choose the manual route, I just want to make sure you're aware of a few tools available to you. There are a lot of tag debuggers out there; Adobe has one, and so does ObservePoint, which checks all sorts of third-party tags for you and is a free option as well, so go ahead and check that out and get it installed in Chrome. You can also use Charles or Fiddler to sniff through those network requests, and then you also have the developer tools available to you in Chrome.

Now, these are all not ideal options. They're all going to be slow and somewhat painful to go through. But it's better than nothing, that's for sure. And then there's ObservePoint, the automated solution I mentioned briefly before. This allows you to set up scans of your implementation with built-in rules that are founded on your current documentation, to make sure your current implementation is correct. Then, as you migrate over, it maintains that consistency and will alert you if there are changes. It's super fast; we can crawl thousands and thousands of pages really quickly for you. It's accurate, and the tests that you set up in ObservePoint to go through this migration can be leveraged as you continue to make changes to your website and your implementations at the analytics level.

So now let's talk about the migrate phase. We talked about how we'll prepare by getting an idea of what's currently on our site, and how we're going to check that it stays the same as we go through the process.

But now let's talk about the actual options you have to migrate. The first is the lift and shift method, and the second is starting fresh.

So if you choose the lift and shift method, it probably means you have a relatively simple implementation that you don't want to change much, and this would be a really good option for you. This is done just by choosing the Upgrade to Launch button that already exists within DTM.

And one of the greatest benefits of this option is that you won't have to replace your DTM embed code as part of the upgrade process. Adobe will link your old DTM embed code to your new Launch implementation, so you won't have to go back and switch your code across your site. Note that you cannot deploy DTM and Launch on the same page to work in tandem. That would be nice if you needed to move tags over incrementally, but they both rely on the _satellite object and will get confused if mixed and matched. So it's kind of an all-at-once option there. Also know that you'll want to double check that you're not using a dev-created or unsupported _satellite method before making this change. Search Discovery actually has a tool, a DTM to Launch assessment app, to verify this for you. So go ahead and check that out if you're concerned about that before you go ahead and make this upgrade to Launch.

So as for the migration, here are some basic steps to migrate with the Upgrade to Launch button, which actually came from Ben Robinson at Adobe.

So the first thing you want to do is log into DTM and hit the upgrade to Launch button. 

Then find your new property in Launch.

Then test in dev. Development environments in Launch are intended to be used as you iterate through changes, to ensure that the business logic you're configuring actually behaves as you expect.

Then the next step (not the last one) is step four, where you'll test in stage. Staging environments in Launch are intended to be deployed on your site's staging environment, one that is as close as possible to production, and it's here that most customers run automated tests and check the code.

Then, in step five, you'll link your Launch production environment to your DTM production environment via the embed code described above.

And then last, you'll publish to prod. As soon as you publish to prod, your shiny new, thoroughly tested Launch container tag will be delivered to the browser immediately, because the Launch container tag overrides the DTM one and browsers are already set to retrieve the DTM embed code. So it's really simple; there's no code to rewrite. The benefit of the lift and shift approach is that it's relatively hands-off: your data elements and rules, including custom code, will be migrated over to Adobe Launch.

Once you push Launch to production, your embed code will automatically begin referencing the Launch script instead of the DTM one.

Now, it's not all rose-colored glasses with this approach; there are some pitfalls to be aware of. The upgrade process does not automatically convert your custom HTML/JavaScript tags into their counterparts in the extensions library. So if you want to take advantage of each extension's configuration interface, you'll have to migrate those tags manually after upgrading to Launch. Another pitfall to consider is that everything bad about your current implementation in DTM will now be transferred over to Launch. For example, if you have confusing rule naming conventions in DTM, those will come over to Launch as well. So the other option here is starting fresh. This may be because you're hoping to make some improvements in your implementation, or because there are a lot of current issues you don't like; that's probably a good opportunity to start fresh with your implementation.

So some things to consider are extensions, data elements, and rules, and how they all play a part here. First, extensions: in DTM, all tags that weren't Adobe Analytics had to be implemented with custom HTML tags. Unless there's a special edge case, you're most likely going to have to convert HTML tags to extensions in Adobe Launch, either one by one or all at once as you migrate. Setup will be different for each extension, but here are some questions to consider as you evaluate migrating tags to extensions.

First, which HTML snippets could convert to a corresponding extension? Have you set up different tracking IDs or report suites for analytics solutions to correspond with the various developer environments (i.e., dev, staging, production)? And do you have any outdated tags that you should update or sunset? So thinking about, okay, is this tag no longer working, or is it not being used? Let's go ahead and clean up our site if we can at this time.
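That second question, separate tracking IDs or report suites per environment, is often handled with a simple hostname switch. This is a hedged sketch; the hostnames and report suite IDs are made-up examples, not Adobe defaults.

```javascript
// Hedged sketch: route analytics data to a per-environment report suite
// based on hostname, so dev and staging traffic never pollutes the
// production data set. Hostnames and suite IDs are hypothetical.
function reportSuiteFor(hostname) {
  if (/^(dev\.|localhost)/.test(hostname)) return "examplecorp-dev";
  if (/^staging\./.test(hostname)) return "examplecorp-staging";
  return "examplecorp-prod";
}
```

In Launch this logic would typically live in a data element or the analytics extension's configuration rather than raw custom code, but the decision it encodes is the same.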

Then, looking at the data elements: like in Adobe DTM, data elements in Launch are dynamic values that you can use within the Launch interface to pass information to different vendors. So as you migrate to Adobe Launch, here are some potential improvements that you could make.

First, update the naming conventions and make sure those are sound and make sense for everyone that's going to be using this. This is also a good opportunity to update your data layer. Data layers are often developed quickly just to catch up with the times, so this may be a good time to clean yours up and make sure it's actually passing the information you're looking for and using.

Then lastly, we'll look at rules. Unlike DTM, where rules were divided up into page load rules, event-based rules, and direct call rules, Launch now has a single interface exposing all rules. So some considerations you'll want to make when migrating rules over to Launch: Is there a more intuitive naming convention that could be used for naming rules? Do you have duplicate rules inside your system, or rules that could be combined to accomplish the same objective? And how can you use rule ordering to improve data accuracy? That's a tool in Launch that you can use to decide when rules are going to execute.
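The duplicate-rule question above can be automated before migrating: normalize each rule name and group collisions. A hedged sketch, with hypothetical rule names:

```javascript
// Hedged sketch: group rule names that collapse to the same normalized
// form (case and extra whitespace ignored) to surface likely duplicates
// for manual review before they are migrated into Launch.
function findDuplicateRules(ruleNames) {
  const groups = {};
  for (const name of ruleNames) {
    const key = name.trim().toLowerCase().replace(/\s+/g, " ");
    (groups[key] = groups[key] || []).push(name);
  }
  return Object.values(groups).filter((group) => group.length > 1);
}
```

Each returned group is a candidate for consolidation into a single rule, or at least for a more intuitive naming convention.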

Lastly is the testing phase. Once you get this all pushed out, this is where ObservePoint really comes in and makes your life easier. ObservePoint's Web Journeys will allow you to automate the testing of your critical conversion paths, like booking, shopping, searching on your site, and logging in. All those paths that are critical to your business can be simulated inside of ObservePoint and checked for the corresponding measurements and their values, to make sure they're all firing as expected. Web Audits in ObservePoint can perform frequent scans of your top pages, perform actions on each page, and even wait for time-based triggers to fire as well. And as we all know, data layers are instrumental in all of this and are the foundation of our digital marketing efforts now.

So ObservePoint has that covered too: it can validate your data layer and allows you to create automated tests to make sure data is being passed from your data layer accurately down into your corresponding analytics technologies as well.
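The core of that kind of data layer validation can be sketched as a schema check: required keys must exist and hold the expected types before their values feed analytics. The schema and sample data layer below are hypothetical, not an ObservePoint API.

```javascript
// Hedged sketch of an automated data layer check: verify that required
// keys exist and hold the expected primitive types, returning a list of
// human-readable problems for alerting.
function validateDataLayer(dataLayer, schema) {
  const errors = [];
  for (const [key, expectedType] of Object.entries(schema)) {
    if (!(key in dataLayer)) {
      errors.push(`missing: ${key}`);
    } else if (typeof dataLayer[key] !== expectedType) {
      errors.push(`wrong type: ${key} (expected ${expectedType})`);
    }
  }
  return errors;
}
```

Run against every audited page, an empty result means the data layer matches the documented standard; anything else is a gap to fix before it reaches your analytics tools.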

So the essence of testing in all of this is comparison. Whether you're comparing your Launch implementation against documentation or against your production environment, ObservePoint can help with our automatic comparison tools. Your options for testing, as we've spoken about before, are either manual or automated; depending on the resources available to you and the complexity of your implementation, you may really want to consider going the automated route.

Some things you'll want to prioritize as you're testing here are those critical conversion paths: making sure you have those clearly defined and know what KPI variables are firing along the way; the types of pages that you want to constantly check to make sure they're performing well and returning the data you're expecting; and making sure the data layer, that foundational piece, is acting as expected.

So now that we've covered the high-level steps involved with migrating, and how ObservePoint can be integrated to do a lot of that heavy lifting for you, I'd like to share a success story from one of Adobe and ObservePoint's mutual customers: Carnival Corporation.

Carnival Corporation is one of the world's largest leisure travel companies and is known for providing some of the top travel destinations and experiences worldwide. It's a holding company that is well known for companies like Carnival Cruise Line, Holland America Line, and Princess Cruises. Carnival employs over 120,000 people and attracts nearly eleven and a half million guests per year. Given how big that is, you can imagine a key to Carnival's success is that their various websites allow users to view their destinations, explore activities, and travel more. Underpinning all these websites and experiences is Adobe Experience Cloud, collecting all the user data and optimizing these experiences. So it's essential that these all stay up and are feeding accurate data.

So the first part here, which we touched on before, is how they maintained consistency through this process of migrating. Because Launch is meant to be an improvement on DTM, some older methods were changed or deprecated, making a one-to-one migration approach impossible. TMS migrations are also a moving target. Tim, who was at Carnival working on this with ObservePoint and Adobe, frequently had to make changes and updates to the existing implementation upon request from any of the five brands he was working with. Typically five to ten new requests would come in from each brand on a weekly basis. With such differences and frequent changes, there was no easy way for Tim to create a single source of truth to refer to during the migration; it was constantly changing. Because Carnival had used a single DTM instance to meet the needs of all five brands, there was a huge volume of tags that needed to be evaluated and migrated in this process. Combine that with the dynamic nature of their implementation, and this volume could prove to be unmanageable if handled manually.

At the time, Tim's team comprised only himself and two others, a really small team to do all of this for all these brands and the volume of tagging that was involved. So the time they could spend on quality assurance and testing was limited as it was, without even taking into account the extensive testing requirements that would come with migrating from the older platform to the new one. To perform QA manually through this transition would be extremely time consuming, riddled with human error, and pretty much impossible.

So before attempting the migration, Tim needed a way to create a baseline for their tags that would change with their implementation, without requiring a ton of manual work upfront. Tim and his team determined to use ObservePoint's Web Audits feature to accomplish this. Due to the sheer volume of tagging rules they had to migrate, it was only natural they would miss something as they rewrote the code. In some cases they found that they would deprecate a piece of code because they didn't fully understand how critical it was, only to realize that the deleted code was necessary for the rule to function.

So it was not an easy process, even with ObservePoint. But this is what Tim said after all the migrations: "ObservePoint is uniquely positioned to help anybody who needs to transition between these two tools. It made it really easy for us." Which is pretty profound, given some of the headaches and the sheer volume of requests that he had to handle.

So the bottom line here is that leveraging ObservePoint during the Adobe DTM to Launch migration can save time and money and make the whole task a heck of a lot easier.

So we've now gone through this process of how to migrate and how to make sure it's consistent and accurate. So what is next? We've noticed in talking with many of our customers that even when their data is accurate, it's still a struggle to get insights that lead to growth and better customer experiences. So ObservePoint has gone out of its way to acquire a technology this year to help with this. We've recently acquired Strala, which has technologies such as Touchpoints, JourneyStream, and Prism. These technologies allow you to view your customer's journey holistically and gain a truly comprehensive view of what leads to conversions.

So you can see here we've got our data accurate now and then we're trying to get these insights. And there's a big gap here. Oftentimes the data is siloed, incomplete, inconsistent, not granular, and not real time. 

And this is across all of your technology tools, whether it's ad tools, analytics, or information from offline resources like call centers, all that fun stuff. It's all in different spots, and it's really hard to bring it all together. With Strala, the Touchpoints product standardizes all that data upfront. It can do this with all your online and offline data, putting it together in a complete, consistent, unified data set, which then leads to high trust and gives you complete insights across the board for your company. In more detail, that first product, Touchpoints, allows you to automatically define and standardize every customer journey touchpoint before that journey even begins, creating a solid foundation for trusted data-driven insights. Then you can map each taxonomy to your data standard with Touchpoints to ensure all data can act as a single set.

Using spreadsheets and a homegrown system for UTM and tracking code management is very labor intensive, so automating these processes with Touchpoints frees up time and labor so you can focus on creating and optimizing user experiences. With JourneyStream, tracking page visits is relatively easy; but once you attempt to track all paid, owned, and earned media interactions, tracking becomes a time-consuming, error-prone process. JourneyStream automates this process, allowing you to accurately connect interactions to conversions. Built on the Touchpoints foundation, JourneyStream is a complete, turnkey marketing and experience data repository that allows you to capture every online and offline interaction across the entire customer journey, creating a unified data set for trusted insights.
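To make the spreadsheet pain concrete, standardized tracking code generation is the kind of task that automation replaces. This is a generic sketch using the common utm_* convention, not Strala's Touchpoints API; the campaign values are made up.

```javascript
// Hedged sketch: generate a standardized UTM-tagged URL from a campaign
// record, a task otherwise managed by hand in spreadsheets. Consistent
// generation keeps the taxonomy uniform across every channel.
function buildTrackedUrl(baseUrl, campaign) {
  const params = new URLSearchParams({
    utm_source: campaign.source,
    utm_medium: campaign.medium,
    utm_campaign: campaign.name,
  });
  return `${baseUrl}?${params.toString()}`;
}
```

Because every link is produced by the same function, downstream reporting never has to reconcile hand-typed variants like "Email" versus "e-mail".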

And then lastly, the real power comes with Prism. Trusting your ROI means trusting your data, and Prism helps you do exactly that. Prism is a turnkey, algorithmic attribution solution built on a complete and unified data set, enabling you to achieve ROI visibility across all marketing and experience efforts. Through these algorithmic and rule-based attribution models, Prism allows you to show your contribution to sales, prove the value of your efforts, and justify your investment across all channels and your content. So you can see here the whole foundation of how Strala works and how all these pieces work together. It gives you holistic data across your online and offline channels, and it really integrates and automates so you can do this at scale, essentially becoming the content intelligence king.

So, the key takeaways here from ObservePoint's use cases, with Chris at MasterCard, with migrating between technologies, and with how Strala comes into play: automated testing is more efficient. It takes less time and less manpower, and you get automatic alerts when things go wrong. It's more effective: more accurate, with fewer mistakes, and your QA process is much easier. You can also do it at a much larger scale and in more depth; you can QA more tags and more sites, and scan more often than you ever could manually. And then the big thing here is peace of mind: the software tests run in the background for you and alert you when things go wrong, so you can have that sense of security.

If you're interested in knowing more about MasterCard's or Carnival's use cases, about migrating any current systems you're working on or with, or about Strala, please feel free to reach out to your ObservePoint rep or your consultant and we'd be happy to discuss it more with you. We appreciate you taking the time to listen to us today. Hopefully you found this insightful and valuable. If you have any questions, please don't hesitate to reach out. We'd love to help, and thanks again for joining.
