Technology Governance: Identify & Validate MarTech for Accurate Data & Actionable Insights - Arthur Engelhard, Newfold Digital

Your company relies heavily on tags to power your analytics and MarTech solutions. So when errors crop up and tags break down, you’re left with bad data, lost ROI on MarTech, negative user experiences, and important business decisions being made on inaccurate data.

This session focuses on empowering your team with data governance best practices, including how to:

  • Audit your site and app to uncover tagging errors and gaps in tracking to ensure your data is complete and accurate.
  • Create custom rules and requirements to test against live implementations.
  • Recreate user paths to verify functionality and analytics integration.
  • Monitor your implementation and set up alerts to catch errors before they impact your data.

 

Arthur Engelhard

Senior Digital Implementation Manager, Newfold Digital

Arthur was introduced to web dev and web analytics in 2010 while working for a small startup. He gained a great passion for data capture, quality, cleansing, and presentation in just a couple of years. After leaving that position, Arthur worked at a regional bank focusing entirely on database administration and quality controls, where he gained a great appreciation for operating at scale, but in a compliant and legal manner. In 2018, he followed his passion back to website analytics as a Digital Data Implementation Specialist at Endurance International (acquired by Newfold Digital in 2020), on the verge of GDPR's debut. Leveraging his past experience, his role for the last two years has been intensely focused on maintaining the company's digital marketing and analytics technologies while complying with an ever-changing and challenging legal landscape.

 

Mike Maziarz

Lead Product Owner, ObservePoint

As Lead Product Owner at ObservePoint, Mike Maziarz focuses on identifying and designing solutions to solve technology governance challenges for data-driven organizations. Prior to joining the Product team, Mike spent five years understanding and solving problems for digital marketers and analysts as a Customer Success Manager. Before coming to ObservePoint as a data governance consultant, Mike worked for technology giant Vivint Smarthome, where he was responsible for workforce management, retention, and business intelligence. Mike is an Oregon native who loves golfing, mountain biking, and data.

 


 

Mike Maziarz: (00:41)
Yes, welcome to the Technology Governance session for Validate. This is where we'll be talking about ways to leverage ObservePoint to validate your implementations and your digital world. I'm Mike Maziarz, Lead Product Owner here at ObservePoint. I've been here for about seven years working on the front lines with our customers, solving their problems and finding solutions for them at a customer support level, and for the last several years I've been doing that at the product level. So, really excited to be talking with you today. My counterpart here today is Arthur, and I'll let him introduce himself.

Arthur Engelhard: (01:18)
My name's Arthur. I'm the, I guess, Digital Tag Manager Manager. I work at Newfold Digital, and we have about 120 websites under management. I've probably spent about 90% of my time on 12 of them. We've used tag managers from Tealium iQ to, primarily, Google Tag Manager, but we've been getting into and using a lot more Adobe Launch lately. So, it's been a pretty exciting time to be managing the tag managers.

Mike Maziarz: (01:51)
Yeah, definitely. So, let's jump into what we're going to be talking about today. We're going to be talking about one of the most powerful tools ObservePoint offers, which is Audits: what they are, how they can be used, and some best practices, especially ones that Arthur uses on a day-to-day level. And then we'll be talking about how to get more out of those Audits. Audits can be pretty simple and straightforward, but there are a lot of tools at your disposal to get more out of them for different use cases, and we'll be going through those. The ones we'll be talking about are defining Rules and setting those up to enforce your standards, then the ObservePoint custom data tag, followed by remote file mapping.

Mike Maziarz: (02:37)
So Audits, what are they? Essentially, Audits are an automated crawl that gives you a snapshot of the technologies or tags that are found on your website. It also gives you all the page data that corresponds with that, to give you the insights needed to make sure your customers are having a great experience and that those tags or technologies are loading correctly. They can do work humans could never do. They can be very small and strategic, or they can be very general and crawl hundreds of thousands of pages. These can be set up manually and just kicked off for quick validation and release validation. Most customers use these as automated tools that run in the background and alert them if there are issues, kind of a constant watch for them. You can also set these up to run from different locations and browser viewports to mimic a lot of different personas, behaviors, and use cases as well.

Mike Maziarz: (03:31)
So some common use cases for Audits are technology inventory — getting an inventory of what tags or technologies may be on your site. It's hard to know that just by going through a few pages manually, so this really helps. And we don't audit for technology presence only, but for all the variable data that's collected inside, so we can validate that information is being captured and sent correctly and as expected. Release validation is also a huge part of what we do, being part of that QA process as you make different shifts inside your technologies. And then, I mentioned experience earlier. We help with broken experiences and pages, and even some privacy validation as well, to make sure that what's being loaded, and where data is being sent, is in line with your standards as well as your customers' expectations. And lastly, we can definitely help out with site performance with these Audits, to make sure pages are loading quickly and that tags aren't being lost in that experience. So Arthur, let's pass it over to you and hear what you're doing to solve some of your use cases with Audits.

Arthur Engelhard: (04:48)
So, I think when I was first introduced to ObservePoint, maybe three and a half or four years ago, I had the same reaction that a lot of new users have, and that's: oh my goodness, I can test all the tags, all the data, on all the pages, all the time. And so that's what I tried to do. I created this mega audit that was super detailed, had rules checking everything from our digital acquisition platforms like Facebook and AdWords to our implementation of analytics, whether it was Adobe or Google Analytics. It was crawling thousands of pages, and I quickly became inundated with data. I normally very much like working with data, but I have to be realistic about how much time I have every week. And so, rethinking how to approach the problem, I came up with the idea of: let's audit with purpose.

Arthur Engelhard: (05:39)
Let's make sure that our audits have a specific, tangible goal and a purpose to them. And so, we had to transform our mega audit into lots of little ones that were a little more targeted but had purpose. And it helped a lot. We used naming conventions and labels and even the ObservePoint folder system to make sure that we could keep organized with how many audits we were running, whether they were passing or failing, and following up on failures. But you guys can figure out how to name your audits yourselves, I'm sure. So, to get into what exactly that structure, that methodology, looked like: I based it around the idea of a pyramid. At the base of that pyramid is a general audit, very large, testing lots of pages.

Arthur Engelhard: (06:27)
I ended up calling it a discovery audit because it would find so many weird things. Like, you know, if you go on a site and start crawling 5,000 or 10,000 pages, you're going to find broken links. You're going to find maybe places where there are links to legacy code bases that the user shouldn't be able to get to. So I ended up calling it my discovery audit, but it's just a general audit. And you're just going to let it run and let it explore. You're not going to use exclusions much — and for those of you who don't know, exclusions are a way to tell your audit not to go to certain places. So you're going to be pretty light with those. The idea is probably just to reduce the noise a bit. Sometimes you'll have PDF download links or links that take you off to a different code base, or — another one that we had a lot of trouble with was search result pages where you can tab through the search results, and ObservePoint kept thinking those were links...

Arthur Engelhard: (07:20)
So I had to tell it to stop, but the point is, you should be pretty light-handed. And with that, you're going to want rules that are equally as vague and easy to digest, because with all of this information over thousands of pages, we don't want to also have to be mining through every single tiny little custom dimension, eVar, or prop error, too. So our rules for such an audit are going to be equally as vague. So that's the base of the pyramid, and then the next level up I ended up calling a site section audit. We're going to group up pages of our site in such a way that they share maybe a code base, or they share tag management configurations, or have similar analytics requirements — think of it like, you know, your slash blogs, right?

Arthur Engelhard: (08:12)
You have slash blogs, slash blog authors, topics, categories, all the individual articles, but basically it all sits on the same code base and has really similar requirements from an analytics perspective. And when we do this, we're not going to scan through, you know, 10,000 pages. We don't need to. I know there probably are 10,000 blog pages; there are plenty of them, but we're only going to do about a hundred. We cap it at about a hundred, and you're going to be pretty heavy-handed when it comes to the exclusions or inclusions, whichever floats your boat. You want to keep that audit, those hundred pages, isolated to that part of the website. You don't want it adventuring and exploring and discovering things. You pretty much want to keep it there. You're going to get a sampling of those pages.
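
For illustration, here's a rough sketch in plain JavaScript of the include/exclude scoping described here — the domain and patterns are hypothetical, and in practice this lives in the audit's inclusion and exclusion settings rather than in code:

```javascript
// Hypothetical sketch of include/exclude scoping for a site-section audit.
// The domain and patterns are illustrative, not ObservePoint configuration.
const exclusions = [
  /\.pdf$/i,    // skip PDF download links
  /\/search\?/, // skip paginated search-result pages
  /\/legacy\//  // skip links into an old code base
];

const inclusion = /^https:\/\/www\.example\.com\/blog\//; // stay on /blog

function shouldCrawl(url) {
  return inclusion.test(url) && !exclusions.some((re) => re.test(url));
}

console.log(shouldCrawl('https://www.example.com/blog/post-1'));         // true
console.log(shouldCrawl('https://www.example.com/blog/whitepaper.pdf')); // false
```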

Arthur Engelhard: (09:01)
The reason we do that is because if there's a systemic issue with the way that you've done your tagging or the way that you've implemented your digital marketing tech, you're probably going to see it in a hundred pages. You don't need to go to the extent of scanning 10,000 pages just to find out you made a typo in a custom dimension or an eVar or a prop, right? You're going to see it in a hundred pages. And again, that's just going to make this a little easier to digest. You're going to find the problem, you're going to isolate it quickly, and you're gonna be able to resolve it, rather than saying, well, it looks like I had 10,000 errors. I figured that out with 100 pages; I didn't need 10,000. And so likewise, this audit, being slightly more specific, is going to have slightly more specific rule sets.

Arthur Engelhard: (09:49)
And this is where we're going to get into testing those custom dimensions if you're using Google Analytics, or eVars and props for Adobe Analytics, and events. You're going to validate that your digital marketing tech stack is there — your Facebooks and AdWords and whatevers of the world. We're just going to make sure that, throughout all of it, those hundred pages are specifically being tested for that stuff. At the very top of the pyramid, we have sort of an inverse of the relationship between audits and rules. Normally we say, we have this audit, let's see how many rules we can apply to it and test those things. In this case, what we're going to do is say, we have this one really specific functionality we want to test.

Arthur Engelhard: (10:36)
Maybe the user lands on a page with a specific parameter. Is a cookie getting set? Is a modal showing up? Is the CTA correct? Is the link there? Is a certain tag or certain analytics capturing a certain piece of data? In this case, we're not going to have ObservePoint land on a page and decide to crawl all over the place. We're actually going to give it a list of URLs and say, these are the ones you're going to test. And then I'm going to have a very specific rule that should evaluate on each of them to make sure that that is happening. You might've noticed that some of the things I mentioned, like checking cookies, aren't native to ObservePoint. So in these cases, we'd probably end up leveraging something like the ObservePoint custom data tag to look for that cookie value and make sure that it's being set.

Arthur Engelhard: (11:25)
There have been a lot of cases in recent memory where our acquisition team is asking: if a user comes in from a certain place, are the right cookies getting set for Google or for whatever platform it is that they're coming in from? And so this can help with that. You can say, well, ObservePoint, we're going to land on this page with this specific URL. After the page loads, is this specific cookie with this specific value getting set? But again, that's really specific, and it's at the top of the pyramid. So at this point you have the full pyramid: we know that our website is covered from a basic tracking perspective. We've also checked, at a more granular level through sampling, that our analytics tags and our marketing stack are there. And we've gone another step further to test that a certain functionality that's key to our business is also there. I've mentioned rules quite a bit at this point, so I think I'll pass it back to Mike to explain what rules are.

Mike Maziarz: (12:29)
Yeah, thanks Arthur. Those are some great core best practices to get the most out of ObservePoint. And like Arthur mentioned, rules are one of the things we're going to jump into next. This is how we can take ObservePoint to the next level. Now that we have that core foundation of what an audit is and how to set them up in your account, we're going to jump into some of the tools that ObservePoint offers to get more value out of them — more bang for your buck, so to speak. So the first thing we'll talk about here is Rules. Rules are custom to your implementation inside of ObservePoint, as well as to your analytics implementation or other vendors.

Mike Maziarz: (13:12)
And so these are used to ensure that certain tags and variables are loading as expected. This is where you can define your standard inside of ObservePoint and make sure that it actually happens. And we can alert you when we see something different from the standard as we run these Audits. For large Audits, you'll probably want to use more generic Rules, just to make sure that the presence is there, plus maybe some key variables. But then you can also create specific Rules for more targeted Audits, like Arthur mentioned. Rules can be general in nature or very specific, and even quite robust when it comes down to it, but at the core they're just if/then statements. If a condition exists as we come to a page, then I expect this outcome to happen: this tag to match the variable of another tag, or to have these certain values, and these certain events to fire. So Rules are really the core to getting automation set up inside of ObservePoint. As these Audits run in the background for you, you can have peace of mind that you'll be alerted if something deviates from your standard. So let's pass it back over to Arthur to jump into how he's using Rules.
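
As a rough illustration of that if/then idea — this is not ObservePoint's rule syntax, just a plain-JavaScript sketch with a hypothetical page capture and a checkout-event expectation:

```javascript
// A plain-JavaScript sketch of a rule's if/then shape — not ObservePoint's
// actual syntax. The page object is a hypothetical stand-in for what an
// audit captures on each page.
function evaluateRule(page) {
  // IF: the condition — we landed on a checkout page...
  if (!/\/checkout/.test(page.url)) return { applies: false };

  // THEN: the expectation — an Adobe Analytics beacon fired with the
  // checkout event ("/b/ss/" is the AA beacon path).
  const aaHit = page.requests.find((r) => r.url.includes('/b/ss/'));
  const passed = Boolean(aaHit && /scCheckout/.test(aaHit.url));
  return { applies: true, passed };
}

// Example with a fabricated page capture:
console.log(evaluateRule({
  url: 'https://www.example.com/checkout',
  requests: [{ url: 'https://metrics.example.com/b/ss/rsid/1?events=scCheckout' }]
}));
// -> { applies: true, passed: true }
```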

Arthur Engelhard: (14:22)
Thanks. So, yeah, I like rules because I don't have to actually go into the audit or the journey to figure out if things are working. I get this nice little green check mark that makes me feel all warm and fuzzy inside. But if we go back to our methodology, we've got our pyramid of audits, with our general discovery audit as the base. And what this is doing is, again, scanning 5,000, 10,000, however many thousand pages you want to scan. The rules applied to that audit should be equally as vague. We're just going to be testing to make sure that our basic functionality and foundation for our tracking is there. What does that mean? That means: are the tag managers and the analytics there? Not necessarily every little piece of data for the analytics, but just that it's present.
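
A minimal sketch of what such a vague, presence-only check amounts to — the URL patterns below are illustrative examples for a tag manager, analytics, and a consent script:

```javascript
// Hypothetical sketch of the "vague" presence checks a discovery audit's
// rules make: just confirm the foundational scripts showed up on the page.
const foundational = {
  tagManager: /googletagmanager\.com\/gtm\.js|tags\.tiqcdn\.com/,
  analytics: /google-analytics\.com|\/b\/ss\//,
  consent: /cdn\.cookielaw\.org/ // e.g., OneTrust
};

function checkFoundation(requestUrls) {
  return Object.fromEntries(
    Object.entries(foundational).map(([name, re]) => [
      name,
      requestUrls.some((u) => re.test(u))
    ])
  );
}

// Example: -> { tagManager: true, analytics: true, consent: false }
console.log(checkFoundation([
  'https://www.googletagmanager.com/gtm.js?id=GTM-XXXX',
  'https://www.google-analytics.com/collect?v=1'
]));
```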

Arthur Engelhard: (15:15)
And I'm now actually testing to make sure that our third-party compliance scripts are running too. This would be like OneTrust or TrustArc or whoever you use — making sure that that's present on all the pages as well. You can also include things like the A/B testing platform, making sure that that's present. We're not diving into all the little details; we're just trying to make sure that the foundation for our tracking is there on all the pages. And it'll help you find gaps where the tracking simply is not there for whatever reason. Otherwise it'll just say, "Hey, you scanned 10,000 pages and 9,993 of them have tracking." And that should give you a good feeling about where you're at from an implementation standpoint. The next level of that pyramid is the site section audits.

Arthur Engelhard: (16:06)
These are where, because the audits are a little more specific, we're going to be more specific with the rules. We can be, because there aren't as many pages to test, so it's okay if we gather a bunch of extra data and fish through it a bit; it's not going to be that much. But we are going to get into, you know, the digital acquisition platforms. Is Facebook collecting the right data? Are AdWords and DoubleClick there? Are the right tags there at the right time? And of course analytics, right? Is the Google Analytics script running? Is it capturing all the right custom dimensions? Not just is it capturing, but maybe you want to make sure that a certain custom dimension equals a certain value. RegEx is your friend here, right? You can say that I expect this custom dimension to be one of these values.

Arthur Engelhard: (16:50)
It can start with, end with, whatever the requirements may be, but you can definitely get more detailed here, because you want to be validating the implementation of your analytics at this point. That reminds me: we had an issue where we like to give our pages page types so our analysts can include or exclude large groups of pages at their leisure. And we had all sorts of them, like checkout or help or blog or landing page or homepage. We have one called company info, and that's just the usual about us, contact us, legal, things like that. I remember delivering the requirements to our developers, being like, hey, can you update the data layer to make sure company info is the page type? And they gave it back and said, "Hey, here, we've done the work." And I remember going through checking five or six out of, you know, 30 or 40 pages, thinking, okay, they got the hang of it.

Arthur Engelhard: (17:44)
Good. All right, let's get into production. And then as soon as that audit ran, there's one page where, I don't know if it was me or them, someone made a typo, and it's now companay info — we've misspelled the word company. My rule says, clearly, it should be company underscore info, and it says companay underscore info. And now I get to look at that every week until I get it fixed. It's reminding me to validate every single page and not cut corners. Okay. But after that, we've got our specific audits, right, with the very specific rules. And in this case, we're doing things like loading a specific page, maybe clicking on something specific, or landing with a parameter. And we're testing that the analytics tag is passing that one piece of data that's very critical, or that the modal is showing up because it's so important, or the right cookies are being set with the right values. In these cases, you're probably going to leverage very specific rules, maybe even a rule specifically designed just for that audit, or even use your custom OP data tag to fetch cookie values or check for those banners to be present on the page. But yeah, at this point I've now mentioned the OP custom tag twice. So I think that's what we're going to get into next, right, Mike?
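
To make the regex point concrete, a minimal sketch of the kind of allowed-values pattern a rule can apply to a page-type dimension — a one-character typo like companay_info fails immediately:

```javascript
// Hypothetical sketch: the kind of regex a rule can apply to a page-type
// dimension. One allowed list, and a one-character typo fails immediately.
const allowedPageTypes = /^(checkout|help|blog|landing_page|homepage|company_info)$/;

console.log(allowedPageTypes.test('company_info'));  // true  — rule passes
console.log(allowedPageTypes.test('companay_info')); // false — typo flagged
```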

Mike Maziarz: (19:06)
Yeah, let's do it. I don't think I'll ever look at the word company the same; I'll always be saying companay now. So, thanks for that, Arthur. Let's jump into the ObservePoint data tag. We talked about Rules, and that's kind of the core and the foundation of taking your Audits — or even Journeys — to the next level. But now, with the ObservePoint data tag, we're taking it up another level. Essentially, the OP custom data tag allows you to pull any data into your ObservePoint Audits or Journeys and make it accessible via Rules. Basically, if you can find it in the dev console, then it can be sent into ObservePoint via this custom data tag. It's essentially just JavaScript that's executed after we load a page; it grabs whatever information you want and then sends it through as kind of a fake network request.

Mike Maziarz: (19:57)
So, it's a pretty awesome tool that you can use to validate lots of different use cases. Some examples are gathering more information on SEO data and defining rules around that, as well as page performance — getting more information from the DOM content being loaded. We also have out-of-the-box scripts for custom vendors; for Tealium, Ensighten, and Adobe, we can gather even more information than what comes out of our standard audit reporting. Another strong use case, which Arthur mentioned earlier, is checking for a privacy policy and making sure that's available on every page. We have some out-of-the-box examples that you can find in our help documentation, and our support team can help you solve even more use cases that may be custom to you beyond that. So let's check in with Arthur and see what he's doing with this awesome tool that ObservePoint offers.
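
As a sketch of that performance use case — assuming the standard browser Navigation Timing API, which custom-data-tag JavaScript can read after the page loads:

```javascript
// Hypothetical sketch of custom-data-tag JavaScript for the performance use
// case: pull Navigation Timing values so a rule can assert on them (e.g.,
// DOMContentLoaded under some threshold). Uses the standard browser API.
const [nav] = performance.getEntriesByType('navigation');
const timings = {
  domContentLoadedMs: Math.round(nav.domContentLoadedEventEnd),
  pageLoadMs: Math.round(nav.loadEventEnd)
};
// Returned through the custom data tag, these show up as values a rule can test.
console.log(timings);
```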

Arthur Engelhard: (20:47)
Yeah, so I think at the beginning I said I was really amazed by how I can use ObservePoint to test all the things, all the tags. And then I realized that with the custom data tag, if it's accessible with JavaScript, I can test it too — it's not just everything in the networking tab. I can literally test all the things. It's incredibly powerful, because most of the things in the DOM are accessible through JavaScript. So whether you want to fetch cookie values, test if a banner's there, if the CTA is there, et cetera, all you have to do is write some custom JavaScript. Fetch it; ObservePoint will push it through — it looks like a network request — and then you can write a rule against it. And so, I guess our primary example here is GDPR compliance.

Arthur Engelhard: (21:38)
Like, did the banner show up, right? You load the page — you create an audit to load the page from, say, Germany or France — and you want to know not only did my compliance scripts run, but did the banner actually show up for the user? I could do that audit and open it every day or every week and visually validate, through the screenshot that ObservePoint takes, that the banner's there. Or I could just write the script, pass it through a tag, write the rule, and I get that little green check mark. I don't even have to open it every day; I know that it's working perfectly. So yeah, we have it doing things like checking for the GDPR banner, checking for the cookie settings link, and checking for our do-not-sell privacy link for CCPA compliance.
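
A minimal sketch of the kind of custom-data-tag JavaScript these consent checks imply — the banner selector and link text here are hypothetical and depend on your consent vendor's markup:

```javascript
// Hypothetical sketch of custom-data-tag JavaScript for the consent checks
// described above. The banner selector and link text are made up; real ones
// depend on your consent vendor's markup.
(function collectConsentSignals() {
  const banner = document.querySelector('#consent-banner'); // hypothetical id
  const doNotSell = Array.from(document.querySelectorAll('a')).find((a) =>
    /do not sell/i.test(a.textContent)
  );
  return {
    gdprBannerVisible: Boolean(banner && banner.offsetParent !== null),
    ccpaLinkPresent: Boolean(doNotSell)
  };
})();
// A rule can then assert gdprBannerVisible === true for EU-located runs.
```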

Arthur Engelhard: (22:22)
With a lot of that GDPR compliance comes fetching cookie values. So we're making sure that if the user opts out, did we set the right cookies to make sure that the user stays opted out? And we've used it for validating things like popups and chat bubbles. It was very handy for an issue we had: we were trying to figure out why chat volume was so low. We were getting very few chat responses, and we started doing an audit where we looked for a unique ID that was produced when the chat bubble was present on the page. And we found a couple of pages that needed to be updated. So it was very handy. I've also used it to make sure that, you know, a very important banner — maybe it's a Black Friday banner or something — is there on the site.

Arthur Engelhard: (23:09)
I just warn against creating a lot of audits for that purpose, because those banners tend to be very temporary in nature. So you want a way to flag some of your audits as also temporary — this is helpful and we're going to use it for a week or two, but make sure you flag it in a way, whether it's a label or a folder or something, that says, okay, this is not something I need to keep an eye on forever; it's just for a temporary purpose. I think we also had a request to find outdated mailto links. Sometimes we had mail links that pointed to our old mailboxes, and so we wanted to scan our site and look for those. And that was as simple as scanning the site, using some JavaScript to look for the right hrefs, then exporting the data and doing a little analysis in Excel. It wasn't terribly difficult. So yeah, I think the OP custom data tag is incredibly powerful, and I definitely use it as much as I possibly can.
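
The mailto scan can be sketched in a few lines of DOM-scraping JavaScript — the retired mail domain below is a made-up example:

```javascript
// Hypothetical sketch of the mailto scan: collect mailto links that point
// at a retired mail domain. The domain is illustrative.
const staleMailtos = Array.from(
  document.querySelectorAll('a[href^="mailto:"]')
)
  .map((a) => a.getAttribute('href'))
  .filter((href) => href.includes('@old-company.example'));

// Passed through the custom data tag, this list can be exported per page
// from the audit and reviewed in Excel.
console.log(staleMailtos);
```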

Mike Maziarz: (24:06)
Thanks, Arthur, for those use cases — some of those I hadn't even thought of before, so from a product perspective it's cool to see them. The next and final advanced tool that we'll talk about, as far as leveraging your Audits to get more value, is remote file mapping. And this is probably the most underutilized tool that you can use inside your Audits. Essentially, remote file mapping allows you to swap files on your website, so you can test staging or dev files on production or in different environments — you can test new files, scripts, or libraries on your website before they're actually released. It's kind of a mouthful, but let's talk about some examples of what you can do.
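
Conceptually, a remote file mapping is just a match pattern plus a replacement. The sketch below shows that idea in plain JavaScript — it is not ObservePoint's configuration format, which lives in the UI, and the Launch URLs are hypothetical:

```javascript
// Conceptual sketch only — remote file mappings are configured in
// ObservePoint's UI, not in code. Each mapping pairs a pattern for the
// request with what to serve instead; a null replacement blocks the file.
const fileMappings = [
  // Block the Tealium loader to surface hard-coded tags:
  { match: /tags\.tiqcdn\.com\/utag\/.*\/utag\.js/, replaceWith: null },
  // Swap a production Launch library for its dev build (hypothetical URLs):
  {
    match: /assets\.example\.com\/launch\/prod\.min\.js/,
    replaceWith: 'https://assets.example.com/launch/dev.min.js'
  }
];

function resolveRequest(url) {
  const mapping = fileMappings.find(({ match }) => match.test(url));
  if (!mapping) return url;   // no mapping: load the file as-is
  return mapping.replaceWith; // null = block, string = substitute
}
```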

Mike Maziarz: (24:51)
With that, you can test a tag management system migration — if you're migrating from an older version to a newer version, or perhaps switching vendors. You can also quickly find, inside of Audits, what tags may be loading outside your tag management system. Essentially, how that works is, as you can see in the screenshot on the left, we can look for a file or a network request and replace it with something else. In the screenshot on the right, you can see we're actually just blocking a Tealium instance, so during this Audit, any tag that comes back is essentially hard-coded or being loaded via a different method. It's a really common use case, and I'd recommend every customer have an Audit set up that's at least doing this. You can also test any updates to your tag manager, and even test compatibility with new JavaScript or other library files that you're moving onto your website. So let's jump into some use cases of how Arthur's using this, and we'll go from there.

Arthur Engelhard: (25:57)
Yeah, I think finding the hard-coded tags was a big one. I remember when we first did it, and I was a bit horrified by how many hard-coded pixels we had on some of our pages — mostly our legacy stuff — but it's definitely a great use case: to find hard-coded tags, just block the tag manager, then run the audit as you usually would, and it spits back, "Hey, look at all this Twitter or Facebook that's out there." It's like, well, I'm not loading it — who is? One of my biggest use cases has been for AB testing. I know we've talked a lot about Audits here, but it's in Journeys. Journeys are when you tell ObservePoint to go to a page, click on the CTA, fill out this form, click something — you know, give it these very specific instructions on what to do.

Arthur Engelhard: (26:42)
And our AB testing platforms kept moving the button or changing it or replacing it. And so my Journeys kept failing, and I went back and forth trying to figure out how to solve for this. Do I have to figure out what experiment I'm in and then solve for it? And I felt like, no, I don't have to do this. I just have to block it. So I have a bunch of these that block, you know, Optimizely, Google Optimize, and — what's the one — Adobe Test and Target. And so now I know that I don't have to worry about an experiment or anything like that. I know I'm going to get the right experience. And yeah, like Mike said, I've used it to test, you know, what happens if I loaded this dev version of Tealium or Adobe Launch on production today — do I get the same results?

Arthur Engelhard: (27:28)
And what's great about this system they have with the audits is you can keep your audit entirely intact, as it runs in production normally, and then just adjust that option to switch it from production to dev and rerun it. All your rules will still apply as they normally would. And so you can say, well, yes, I ran it, everything still passed, everything's still okay; we had the desired change that we wanted, and we can move to production. The other thing that I've used it for has been blocking specific tags or vendors and seeing how it affects page performance. So I'll have one page load and another page load — they'll be identical audits, but one of them will block, say, Facebook — and we'll see how that really compares over time. Does it really affect page performance that much? So yeah, with that, I think I'll turn it back over to Mike.

Mike Maziarz: (28:25)
Awesome. Thank you, Arthur. So that drills down through all those different use cases and ways to get more value by taking your base Audit to the next level. Let's jump into questions. It looks like we had one question come in: can you set up these Audits to run in different locations? And yes, that is one of the standard options — you can audit not only from multiple locations in the U.S. but from around the world as well. So if you need to test a certain experience, like a GDPR banner or other banners or different experiences, you can just define your Audit to run from that location, whether it's in the UK or Germany or wherever it may be. We have that available to you specifically for that reason. It's really quick to set up, and you can set up multiple Audits to run from different locations around the world.

Mike Maziarz: (29:17)
So, it looks like we're just about out of time. I appreciate you joining today, and thank you, Arthur, for your insights. I'm pretty familiar with ObservePoint, and he was describing some use cases that I hadn't thought of before, so it was really cool to see that. I really appreciate you taking the time to do that. If you have any questions, feel free to reach out to either of us directly or to the support team at ObservePoint, and we'd be happy to help you get some of these Audits set up. We also have great help documentation to walk you through a lot of the use cases that we talked about today. So with that, thank you, and hopefully we'll talk to you soon. Thank you.

Arthur Engelhard: (29:56)
Thank you, bye!
