Perfecting Release Validation (Americas) - Nick Huang & Charlotte Castillo, Epson

Taking a proactive approach to governing your analytics and marketing tags can provide huge benefits for the accuracy of your implementation, which ultimately boosts your credibility. This session will offer ways you can become your company’s data quality guru with Release Validation and catch analytics errors before they happen, including how to:

  • Address data quality issues before they reach a production environment, with less risk and cost
  • Focus on implementing technology governance in early development environments, such as staging, dev, and QA
  • Instill a culture of proactive technology and data governance

 

Nick Huang

Manager of Business Intelligence & Online Commerce

Epson

Nick Huang is the Manager, Business Intelligence and Online Commerce for Epson America, Inc. In his tenure at Epson, Nick has focused on standardizing data practice across the organization, developing a centralized analytics program to understand and improve customer experience, and creating cross-channel segments to optimize marketing efforts. Nick is also responsible for overseeing the e-commerce operations for Epson. Nick holds a BA in Business Administration from the University of California, Irvine.

 

Charlotte Castillo

Senior Analyst

Epson

Charlotte is a Senior Analyst for Epson America, Inc. She has nearly a decade of experience in the marketing and analytics industry and specializes in website & media analytics, tag QA & management, SEO/SEM, and content marketing. Before Epson, she worked for Torrid and Conill Saatchi & Saatchi as a Digital Analyst.

 


Chris O'Neill: (00:06)
Hello, everybody. Welcome to Perfecting Release Validation, specifically Catching Errors Before They Destroy Your Data. Today with me I have two very special guests, and I'd like to introduce them first. We have Nick Huang, Manager of Business Intelligence and Online Commerce at Epson, and Charlotte Castillo, who's a Senior Analyst, also at Epson. Nick, Charlotte, would you like to say hi real quick?

Nick Huang: (00:40)
Hi.

Charlotte Castillo: (00:40)
Hey Chris.

Chris O'Neill: (00:42)
Hey guys. I'm Chris O'Neill, Solutions Architect here at ObservePoint. We'll be going through some slides, and the topic of today's presentation is: How do you eat an elephant? Just real quick, I'm not sure if anyone's into hunting here, but Nick, Charlotte, have either of you guys ever eaten an elephant or an animal in that same class?

Charlotte Castillo: (01:06)
Eaten, no. I've been hunting, but nothing like that.

Nick Huang: (01:10)
Neither have I.

Chris O'Neill: (01:15)
Horrible joke, horrible joke, but we'll move forward. The age-old question: how do you eat an elephant? Does anybody know the answer, by the way?

Charlotte Castillo: (01:25)
Very slowly because that's a lot.

Chris O'Neill: (01:29)
Yeah, good. One bite at a time, that's the secret. And if you guys haven't caught on yet, what we're talking about today, the elephant, is perfecting your release validation. It can be a very daunting task; I speak to clients all the time who are super excited and wanting to eat the whole elephant, and it becomes very difficult very quickly. So we'll talk about how we do this one bite at a time. And then we'll also speak with Nick and Charlotte, and they'll tell us about their experience and how they were able to eat this elephant, so to speak.

Chris O'Neill: (02:07)
And again, this is just iterating on what we were talking about as far as eating an elephant. I think typically, and especially, maybe not on the analyst level, but as you go higher up in an organization, when they're wanting to drive QA practice, automation, and data quality, what we typically hear from higher-ups is: let's go validate everything, let's improve everything. And that instantly causes paralysis by analysis, usually for the analysts. So what I want to talk about today is actually something we call our maturity model here at ObservePoint, and you can see the slide is a little messed up on the end, so strategic was not very strategically placed, but essentially what we're talking about is going from that reactive process all the way to a strategic process. And I think intuitively we understand that in our lives, in different aspects, when we're being reactive versus strategic, and the differences in stress levels, things like that.

Chris O'Neill: (03:11)
So, the number one problem we're here to mitigate really is bad data. There's a little saying around ObservePoint: when the tracking breaks, you're flying blind. That's a hundred percent what goes on when you have bad data. You get added costs, and now with GDPR and CCPA, there's a massive increase in security risk with piggybacked and rogue tags, possibly tracking users on your site who opted out of tracking, and also missing out on tracking users who have opted in. So there's a wide breadth here as far as all the different risks that we're trying to mitigate with automated QA. The solution that we've come up with at ObservePoint is: set up automated testing to catch problems before they happen. So this is definitely leaning more towards the strategic versus the reactive. At ObservePoint, we'll do automated testing: we'll go to your site, we'll test compliance, we'll send you alerts when errors occur, and we can do this in pre-production so we're not just catching errors in production.

Chris O'Neill: (04:21)
And if most of you are familiar with ObservePoint, you know that our two primary scans are Audits. These are the site-wide scans that capture page load events, so we can get tons of insights about page load time, tag presence, status codes of pages, tags, cookies, all sorts of things like that. And we also have our Journeys; these are primarily to test conversion flows, so you can use these for landing pages, calls to action, and form fills. So let's go ahead and talk about this maturity model. This is actually why I brought Nick and Charlotte on, so we can talk about this a little bit. Let's actually start with the first one, ad-hoc problem solving. What I've typically seen is clients will, or anyone will, implement some sort of technology on the site to improve the tracking and improve data-driven decision-making, and all of a sudden problems start to show up. Nick, would you like to talk about some of the factors that can impact that tracking?
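For context, a Journey of the kind Chris describes is a scripted conversion flow that replays clicks and confirms the expected tags fire. Below is a minimal sketch of that idea using Puppeteer rather than ObservePoint's own engine; the URL, the CTA selector, and the Adobe Analytics "/b/ss/" beacon pattern are illustrative assumptions, not Epson's setup.

```ts
// Conceptual sketch of a conversion-flow ("journey") check. Not ObservePoint's
// implementation; the URL, selector, and beacon pattern are assumptions.
import puppeteer from 'puppeteer';

async function runJourney(): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  const beacons: string[] = [];
  // Record every network request that looks like an Adobe Analytics beacon.
  page.on('request', (req) => {
    if (req.url().includes('/b/ss/')) beacons.push(req.url());
  });

  // Step 1: land on the page (hypothetical URL).
  await page.goto('https://www.example.com/landing', { waitUntil: 'networkidle2' });

  // Step 2: click the call to action (hypothetical selector) and give tags time to fire.
  const beforeClick = beacons.length;
  await page.click('#cta-button');
  await new Promise((resolve) => setTimeout(resolve, 2000));

  if (beacons.length <= beforeClick) {
    throw new Error('Journey break: no analytics beacon fired on the CTA click.');
  }
  console.log(`Journey passed: ${beacons.length} beacon(s) captured in total.`);
  await browser.close();
}

runJourney().catch((err) => { console.error(err); process.exit(1); });
```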

Nick Huang: (05:24)
Yeah, definitely. Typically when we think about validation, it doesn't sound that sexy in comparison to other digital analytics work, but coming from an analytics background, one of the fears that I still have in today's role is that a lot of times you go in to pull the data and you realize, hey, something doesn't look right for a report that I'm pulling to present tomorrow. That fear and that nightmare still exists. Because if you think about where your data is being sent and how you are using data to optimize all of your digital media investments and social media investments, if the data is broken, you can't make any decisions upon it. So a couple of factors can impact that. The first is web development releases: if you introduce new functionality and your tracking breaks, that's one scenario.

Nick Huang: (06:17)
The second scenario could be that you made some adjustments within your tag management solution, so when that happens, that's one potential area where your data could break. If you have browser-specific variables that you're tracking and some of those browser requirements break, that's another area that can eventually introduce inconsistency or inaccuracy within your data. And if you think about how that data is being used, if you have marketing that's optimizing against and dependent upon those custom conversions, it can be very expensive, because you might be optimizing towards inaccurate data, not just from an analysis standpoint, but also from an optimization standpoint.

Chris O'Neill: (06:59)
This is interesting, you've identified three key factors that can impact the tracking: TMS publishes, web dev releases, and browser updates. And then you talked about the effect downstream of that, where now it affects analytics reporting and ads become super costly and ineffective. I'm going to put you on the spot: have you ever given insights or recommendations and then later come to find out they were based on bad data?

Nick Huang: (07:27)
Unfortunately, yes, and I think that's a common story from before we had ObservePoint. One of the things that we had the opportunity to do a couple of years ago was to implement our analytics tracking. And one of the requirements was that we have data validation in place, because between the two analysts that we have here, it's impossible for us to QA every single thing under the sun. So the challenge becomes that if we don't have a good automated validation program that we can scale, there's no way we can trust our data. So in other words, I like to think we found ObservePoint instead of ObservePoint finding us, because we had that need.

Chris O'Neill: (08:07)
Yeah, this definitely has been a great fit. Charlotte, I'm going to kind of put you on the spot. Have you ever done manual spot-checking, or have you ever been kind of caught with your hand in the bad data cookie jar?

Charlotte Castillo: (08:19)
Yeah, we actually have a couple of examples of where we found some instances of bad data that I don't think, without ObservePoint, we would have ever caught, because I think most of the time people say, "Well, if an eVar falls off or one of our event tracking falls off, your metrics immediately go to zero, so you would notice that." But that's not always the case. We actually have a couple of examples where, for instance, an ad tag wasn't firing past a certain browser height, or we lost third-party payment systems for a specific device, or maybe there was third-party code on our site that was supposed to be turned into an eVar, and that wasn't happening. So, there are definitely some instances where it just would have been really difficult to catch. And I think even in QA, going through that process, if we looked at the metrics, tagging would have been one of the last things we would suspect, and we probably just would've said, "Well, we need to optimize more, or maybe it's something to do with our campaigns," or something like that.

Chris O'Neill: (09:22)
Yeah, that's super interesting. Talking about cross-browser testing, talking about different screens, mobile versus tablet, so many variations. I wonder if anyone's caught the data error in my slide, on the manual spot-checking: did that come out with a G at the bottom?

Charlotte Castillo: (09:43)
You know, I didn't even notice.

Chris O'Neill: (09:48)
I'm sure nobody else did either, that's great! Let's talk about conversion path monitoring. Charlotte, let's stick with you. What do you consider a conversion path?

Charlotte Castillo: (10:02)
I think before ObservePoint, we considered our conversion path to be pretty limited, like maybe just your e-commerce flow on mobile and desktop, or maybe just a form submit or something like that. But once you actually onboard with ObservePoint, you're kind of unlimited in terms of the journeys that you can create. So we realized we really wanted to be able to track all of our events across all of our journeys; in other words, we wanted representation for all of our events in at least one conversion path. And that could mean something like email submissions, scroll milestones, video loads or plays, case study downloads, retailer clicks, and all kinds of other events. We actually went back and said, "Okay, we want to track a lot of these other events," and we added them into our journeys so that we would have representation and tracking across all of our major events and interactions that were represented in our analytics.
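That coverage goal, every tracked event represented in at least one journey, is easy to sanity-check programmatically. A minimal sketch follows; the event and journey names are invented for illustration and are not Epson's actual setup.

```ts
// Sketch of an event-coverage check: every tracked event should appear in at
// least one journey. All names below are made-up examples.
const trackedEvents = [
  'email_submission', 'scroll_milestone', 'video_play',
  'case_study_download', 'retailer_click', 'checkout_complete',
];

const journeys: Record<string, string[]> = {
  'Desktop checkout': ['checkout_complete'],
  'Support content': ['video_play', 'case_study_download', 'scroll_milestone'],
  'Where to buy': ['retailer_click'],
};

const covered = new Set(Object.values(journeys).flat());
const uncovered = trackedEvents.filter((event) => !covered.has(event));

if (uncovered.length > 0) {
  console.warn(`Events with no journey coverage: ${uncovered.join(', ')}`);
  // → "Events with no journey coverage: email_submission"
}
```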

Chris O'Neill: (11:14)
Interesting, so if I'm hearing you correctly, as your data quality improved from more robust testing, you guys actually became more nuanced in what a conversion was. So you really went from these macro conversions that you were tracking to getting more nuanced with several micro conversions?

Charlotte Castillo: (11:32)
Yeah, and I think it's really beneficial to even plan for that in the very beginning, if you can. Just because usually when you're onboarding, you have that ability to take advantage of ObservePoint and their staff to basically create all those journeys for you and then duplicate them across all of your staging environments. So definitely if you can plan ahead of time, I think that's a big benefit.

Chris O'Neill: (11:57)
Perfect. I think especially with any sort of technology adoption, planning is definitely the first bite. Nick, let's talk about the second bite. How would you prioritize what you start to test? What are the factors you consider? How do you approach a problem like that?

Nick Huang: (12:17)
For me, the way we thought about it is that there are a lot of tags we deploy on the site, but which ones deserve more importance than the others? So we started with our analytics system, because those tags are typically deployed most consistently across all of our pages and typically contain the most data points. So for us, we started monitoring Adobe Analytics against our SDR, because those are the key things that we're trying to measure across the site from a KPI standpoint. And those designs are in accordance with how the site is set up, in terms of how we envisioned the customer journeys of users interacting with our sites. So if we are able to see those data points being validated accurately, then we can trust that the reports that we see and produce and share with our stakeholders are accurate.

Nick Huang: (13:03)
So, I would start with where your source of truth is, from a reporting standpoint, and where you have the most consistency within your deployment. Typically, the load rule is you load it across all the pages, so you capture the journey that's meaningful to you. One of the things that we've done that is a little bit different is we are using Tealium iQ as our tag management solution. Tealium has an integration with ObservePoint using the ObservePoint data layer validator, which basically allows us to audit both our on-page data layer, as well as any variable that we create through JavaScript, lookup tables, or even pulling from a cookie. So when we validate that, we know whether the data that's being passed and collected through the tag management solution is correct.

Nick Huang: (13:49)
Then the downstream data that you pass to your ad tags will be correct as well. So we validate against these two tags from a prioritization standpoint because they contain the amount of data that we need, and typically those are the most important data points that we capture. The second thing, from a conversion path determination standpoint, is tying back to the key customer journeys that the site was designed for, to make sure that those are hitting our expectations. Another consideration is that if your marketing spend is optimized against certain custom conversions happening on the site, then you want to make sure that those are always accurate. So the other thing that we would think of within our journey is, "what are those KPIs from a measurement framework standpoint?" Especially if you have them in a more downstream conversion; for example, if you're optimizing your paid search campaign against certain actions, you want to make sure those actions never break. So, from a prioritization standpoint, I would think in two dimensions.
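The data layer validation Nick describes boils down to comparing what the page actually exposes against what the SDR says it should. Here is a small sketch of that idea, not the ObservePoint/Tealium integration itself; the variable names and format rules are assumptions for illustration.

```ts
// Sketch of SDR-style data layer validation. Variable names and rules are
// illustrative assumptions, not Epson's SDR.
type DataLayer = Record<string, unknown>;

interface Rule {
  key: string;
  required: boolean;
  pattern?: RegExp; // expected format from the SDR, if any
}

const sdrRules: Rule[] = [
  { key: 'page_name', required: true },
  { key: 'site_section', required: true },
  { key: 'product_id', required: false, pattern: /^[A-Z0-9-]+$/ },
];

function validateDataLayer(dl: DataLayer, rules: Rule[]): string[] {
  const failures: string[] = [];
  for (const rule of rules) {
    const value = dl[rule.key];
    if (value === undefined || value === '') {
      if (rule.required) failures.push(`Missing required variable: ${rule.key}`);
      continue;
    }
    if (rule.pattern && !rule.pattern.test(String(value))) {
      failures.push(`Bad format for ${rule.key}: ${String(value)}`);
    }
  }
  return failures;
}

// Example run against a captured utag_data-style object.
const captured: DataLayer = { page_name: 'home', product_id: 'et-2850' };
console.log(validateDataLayer(captured, sdrRules));
// → ["Missing required variable: site_section", "Bad format for product_id: et-2850"]
```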

Chris O'Neill: (14:53)
Fascinating. This last piece I thought was interesting: so when you're determining how to optimize, you're essentially optimizing on dollars, right? What are the biggest dollars, where's the biggest spend going, and how can I get the biggest ROI?

Nick Huang: (15:10)
Correct.

Chris O'Neill: (15:11)
Yeah, Tealium iQ is a great tool with very strong integrations. We're building more with Tealium; I see a lot of people using Tealium iQ. You talked about data layer testing, that's pretty interesting. How do you guys think about the data layer, especially as the conversation starts to change to server-side, or to the cookieless world, or all these new things that are being introduced? Do you guys have any thoughts, or is that a discussion that happens at Epson?

Nick Huang: (15:38)
It's something that we're working on internally, trying to do more research and get a better grasp of what's happening, and we're planning for that, but we don't have anything that's currently available for sharing.

Chris O'Neill: (15:53)
Don't want to tell us the secret sauce, that's smart. Let's go ahead and jump into automated tracking validation. Charlotte, let's talk about what I think is the most important part of the automated tracking: the alerts. You know, we definitely want to automate the scans, but the alerts are what give us the insights, and the alerts are what drive action. Talk to me a little bit about how you decide where to apply them and how to use them.

Charlotte Castillo: (16:24)
Sure. What I love about the ObservePoint system is that it's really easy to just come in and see exactly where you have journey breaks and where you have rule breaks. I like using the alerts as well, specifically for when an e-commerce journey breaks, because obviously the most important thing that we care about on our site is the checkout. So we do have alerts around those specific journeys, and we also have alerts applied around certain rules that we think are really important. You can either have it send an email or send a notification into Asana or JIRA or whatever system you may use.
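Routing those alerts into a ticketing workflow typically happens over email or a webhook. Below is a rough sketch of a webhook receiver that would file a ticket when a journey-break alert arrives; the payload shape, path, and port are assumptions, not ObservePoint's documented format.

```ts
// Sketch of an alert webhook receiver. The AlertPayload shape, the /op-alert
// path, and port 8080 are assumptions for illustration.
import http from 'node:http';

interface AlertPayload {
  journeyName: string;
  failedStep: string;
  environment: string;
}

http.createServer((req, res) => {
  if (req.method !== 'POST' || req.url !== '/op-alert') {
    res.writeHead(404);
    res.end();
    return;
  }
  let body = '';
  req.on('data', (chunk) => (body += chunk));
  req.on('end', () => {
    const alert = JSON.parse(body) as AlertPayload;
    // In a real setup this is where you would call the ticketing API (JIRA, Asana, etc.).
    console.log(
      `Filing ticket: journey "${alert.journeyName}" broke at step ` +
      `"${alert.failedStep}" in ${alert.environment}`,
    );
    res.writeHead(204);
    res.end();
  });
}).listen(8080, () => console.log('Listening for alert webhooks on :8080'));
```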

Charlotte Castillo: (17:09)
I was going to mention something else about conversion path monitoring. Something that we did was actually try to go through and do a manual spot check imitating what we were going to be getting with ObservePoint. And I think it's a really important thing to try to do, because once we looked at how much time that was taking, we tried to log the hours that it actually took to do a full check of page view and event variables across all of our journeys, as well as for a subset of pages. So for instance, for a hundred pages we did a page view check for all those variables. And once we did that, we realized it was taking, I think, 200 hours or something like that. But it was really good to do at least once, because it really allowed us to validate how important it was to have ObservePoint, because obviously none of us could dedicate that type of time to doing it.
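The order of magnitude Charlotte cites is easy to reconstruct. The variable count and minutes-per-check below are assumptions, just to show how quickly manual QA hours add up.

```ts
// Rough arithmetic behind a "~200 hours" manual QA estimate. The variable count
// and minutes-per-check are assumed values for illustration.
const pages = 100;           // subset of pages spot-checked for page-view variables
const variablesPerPage = 60; // assumed eVars, props, and events inspected per page
const minutesPerCheck = 2;   // assumed time to open the debugger and compare to the SDR

const hours = (pages * variablesPerPage * minutesPerCheck) / 60;
console.log(`${hours} hours of manual QA`); // → "200 hours of manual QA"
```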

Chris O'Neill: (18:14)
Yeah, that's an interesting strategy, let's talk about that. So instead of just implementing automation and trying to build out your QA strategy from the automation level on, you're saying you went in and you manually got in there, you understood what was going on on the site. You tried to set it up manually and that's what drove your implementation strategy with the automation. Is that what I'm hearing?

Charlotte Castillo: (18:40)
Yeah, I think so, to a degree, and it also allowed us to see how much we would catch, like how many things or issues we found just by doing that manually, page by page. It was also another layer of checking for us, because once we had ObservePoint turned on, we realized that, yes, we expected to find these things, but there were also additional errors that we just didn't catch, because obviously when you're looking at hundreds of pages and hundreds of variables on each page, you can miss a lot. It definitely allowed us to prove the value of ObservePoint in the very beginning.

Chris O'Neill: (19:19)
Okay, great. So doing it manually also gave you a little bit of a baseline. So now you understand, okay, this is actually very valuable, we want to focus on these types of automations. I think a lot of people apply the Pareto Principle here, like 80/20. How did you guys determine which tests should be automated versus which tests should be manual? And was that a discussion that even took place?

Charlotte Castillo: (19:47)
I think it's pretty much all automated now. Nick, maybe you can speak to this, but I think the only time that we would manually check anything is if we're finding an error in ObservePoint and we want to do a sight check to make sure that the error actually exists on the page. So I want to say that we're pretty much automated now. Nick, do you have anything to add to that?

Nick Huang: (20:15)
Well, I would say it's a progression. Again, coming from my background, my security blanket, if you will, is really Omnibug and Charles Proxy. But it is a process where we are trying to move the manual validation piece into all automation, because the issue with manual validation is that you can't scale. The only time we still do it now is when we're setting up a couple of changes with our tag management solution and we're emulating that in the dev environment; in that case we will check manually within the debugger just to make sure things are okay. But anything that we want to do at scale, we don't want to be doing manually. So I think it is a progression away from manual, but at times we do find it helpful for troubleshooting a couple of things here and there.

Nick Huang: (21:00)
But even as a best practice for myself, I'm trying to move away from doing that, because within the Journey itself you can also compare different versions, and that's one thing with manual checks that you won't catch, because unless you keep all your logs, it's very hard to see. So one of the points I do want to talk a little bit about, on alerts and finding out where to look, is that typically for development releases you want to check pre-prod and production, and anytime those don't match, you want to be able to catch that. However, on larger releases from a tag management standpoint, there is a tool that we use called Remote File Mapping, which is an ObservePoint tool as well, that we can point to a dev version.

Nick Huang: (21:42)
So when we do make a lot of changes, we can say, "Scan the production site again, but use this profile instead, to make sure everything is aligned." The third thing that can have some impact on false alerts, which we have seen in the past, is that on the site we run a lot of site testing and optimization. We caught on early that in certain instances the crawler was actually qualifying into an experience. So let's say you're running a test on your landing page, and one of your key journeys is to watch whether or not a video plays. Now, if one of our experiences doesn't have that video, well, you get an alert that the journey is broken. The way we solve for that is you can exclude the crawler based on the user agent string, and we build that into our process to make sure that we reduce as many false alerts as possible. So when we do receive alerts about breakage, those are the ones that we want to spend our time looking into. So specifically around finding out which alerts we should be paying attention to, those are a few of the techniques that we've deployed within our process.
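The suppression Nick describes usually lives in the testing tool's audience rules or in the TMS: if the visitor looks like the audit crawler, always serve the control. A minimal on-page sketch follows; the user-agent token and the forceControl() hook are hypothetical stand-ins, not Epson's or ObservePoint's actual values.

```ts
// On-page sketch of crawler suppression for A/B tests. The UA token and
// forceControl() hook are assumptions; a real setup would use the testing
// tool's own audience/targeting rules.
declare function forceControl(): void; // hypothetical hook into the A/B testing tool

const CRAWLER_UA_TOKEN = 'ObservePoint'; // assumed substring in the crawler's user agent

if (navigator.userAgent.includes(CRAWLER_UA_TOKEN)) {
  forceControl(); // the crawler always sees the control, never a test experience
}
```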

Chris O'Neill: (22:47)
Interesting. Charles Proxy, Omnibug, very old school. I respect that, Nick, that's great. I just want to summarize, make sure I got it right. One, I heard you guys started using automation around validating A/B testing, and the key to that with automation was forcing the test and knowing ahead of time which experience you were sending it to. Is that correct?

Nick Huang: (23:14)
We actually do a suppression, so the crawler will always see the control. When it sees the control, we know that, unless something within our development environment has changed the functionality, it's actually a real breakage; whereas if it's just being thrown into one of the experiences, it's harder to predict. So in a way we're just excluding the crawler from ever seeing a test experience.

Chris O'Neill: (23:36)
I heard you mention Remote File Mapping. Remote File Mapping, if you've never heard of it before, that name is probably the most confusing name out there, but it essentially allows you, as ObservePoint is going through and scanning your sites, to swap out one piece of JavaScript for another piece of JavaScript. So I think the example you brought up was testing a staging environment inside of Tealium: you wanted to push the changes in Tealium, maybe to staging, and test those before you pushed them live on the site. Was that the primary use case you just described, Nick?

Nick Huang: (24:15)
Correct, correct. Especially if you have a large release that has a major impact, you want to make sure that you fully test it in that way, but you also don't want to create a brand new journey. So you can duplicate one, change a reference, and still simulate one of the prod journeys by just referencing the different profile. That's correct.

Chris O'Neill: (24:38)
Random, off-the-cuff question, but have you guys ever leveraged Remote File Mapping to swap out your TMS JavaScript for a blank file, therefore not loading the TMS on any of the pages, to isolate all the tags that are firing outside of that container?

Nick Huang: (24:56)
We have not. I have not, at least.

Chris O'Neill: (25:00)
I've seen some clients use that; it's super helpful. I mean, it depends on where you're at. If you're trying to do a site cleanup and you're not sure what tags are firing outside of your container, it's easy to scrape for hard-coded tags, but we all know there are piggybacked and rogue tags, there's piggybacking on the hard-coded tags, and those are a little bit harder to scrape for. So sometimes we can just swap out that TMS code for a blank file and then only have tags fire that are either hard-coded or initiated by a hard-coded tag somewhere down that chain.
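Conceptually, Remote File Mapping is a request swap: when the scan requests one JavaScript file, it receives a different one (or an empty one) instead. The sketch below approximates both use cases discussed here with Puppeteer request interception; it is not ObservePoint's implementation, and the Tealium-style utag.js paths and site URL are made-up examples.

```ts
// Conceptual sketch of Remote File Mapping via request interception. Not
// ObservePoint's implementation; URLs are illustrative Tealium-style paths.
import puppeteer from 'puppeteer';

const PROD_TMS = 'tags.tiqcdn.com/utag/acme/main/prod/utag.js';
const DEV_TMS = 'https://tags.tiqcdn.com/utag/acme/main/dev/utag.js';
const BLANK_JS = ''; // serve an empty file to suppress the TMS entirely

async function scanWithMapping(swapToDev: boolean): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.setRequestInterception(true);

  page.on('request', (req) => {
    if (req.url().includes(PROD_TMS)) {
      if (swapToDev) {
        // Use case 1: test the dev/staging Tealium profile against the production site.
        req.continue({ url: DEV_TMS });
      } else {
        // Use case 2: replace the TMS with a blank file to isolate hard-coded tags.
        req.respond({ status: 200, contentType: 'application/javascript', body: BLANK_JS });
      }
    } else {
      req.continue();
    }
  });

  await page.goto('https://www.example.com/', { waitUntil: 'networkidle2' });
  await browser.close();
}

scanWithMapping(true).catch(console.error);
```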

Chris O'Neill: (25:31)
And this drives into the last part of the conversation I want to get to: process and workflow, which we've discussed, but also just being more strategic in general. So looking forward, for the last couple of minutes, I know one thing that you guys mentioned was interesting, and you brought this up briefly, but let's talk about privacy, let's talk about consent management. How do you guys think about opting in, opting out, the tracking, the categories? How do you guys test to make sure that all the regions are being served the appropriate consent strategy?

Nick Huang: (26:09)
Yeah, especially with CCPA, this is a hot topic within our industry: what can you collect, and how do you monitor a user's consent as they browse throughout the site? The way that we have deployed our consent manager, our users have the option of opting in or opting out of specific tags or tag categories as they browse through the site. At any given point, the user actually has the ability to modify their consent preferences. So what we're working on currently is building those consent preferences into the audit. You can simulate a journey where someone comes in, they accept the consent preference, and then we ensure all the tags fire; but on the inverse, if a user opts into a specific category but not others, we want to make sure that those rules are honored, because those are controlled by our tag manager as well. So we have to run these simulations to say: if a user is five steps through a journey and they go in and change their preference, then on step six and step seven the tags that we don't capture are just as important, and we need to make sure that's being adhered to. From a region standpoint, my team has responsibility over North and Latin America. As you know, Brazil recently introduced a law that went into effect and changes the definition slightly from CCPA, depending on how your legal team interprets it, of course. So we have to audit for different functionality of the consent manager, and those checks have to be built in at the journey level. Then we know what the nuanced differences are and can make sure it's maintained that way throughout.
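The consent simulations Nick describes amount to setting a category-level consent state, browsing, and asserting that tags in declined categories never fire. Here is a minimal sketch of that check; the cookie name, category values, ad-tag domain, and site URL are assumptions for illustration, not Epson's consent manager.

```ts
// Sketch of a consent-honoring check. The consent cookie, category encoding,
// and ad-tag domain are hypothetical examples.
import puppeteer from 'puppeteer';

async function checkConsentHonored(): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Simulate a visitor who accepted analytics but declined advertising.
  await page.setCookie({
    name: 'consent_prefs', // hypothetical consent-manager cookie
    value: 'analytics:true|advertising:false',
    url: 'https://www.example.com',
  });

  const adRequests: string[] = [];
  page.on('request', (req) => {
    if (req.url().includes('doubleclick.net')) adRequests.push(req.url());
  });

  await page.goto('https://www.example.com/', { waitUntil: 'networkidle2' });

  if (adRequests.length > 0) {
    throw new Error(
      `Consent violation: ${adRequests.length} ad request(s) fired while advertising was declined.`,
    );
  }
  console.log('Consent honored: no advertising tags fired.');
  await browser.close();
}

checkConsentHonored().catch((err) => { console.error(err); process.exit(1); });
```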

Chris O'Neill: (27:55)
Yeah, I know we spoke offline about this, it's so complicated. With GDPR you often have to opt in; with CCPA you're already opted in and have to opt out. And I don't even know with LGPD, that's the Brazilian one, I'm just glad I know the four letters, but I don't even know the nuance with the opt-in, opt-out there. I hear Nevada has a law coming and possibly Canada as well. Just briefly, for the last minute, how do you guys plan for the future around all of these different privacy laws coming in? Have you guys thought about a strategy, or thought about how to eat that elephant?

Charlotte Castillo: (28:31)
We were just chatting about the new consent preferences and that new section in ObservePoint. So yeah, we're excited to explore that for sure.

Nick Huang: (28:42)
And I think from a technical standpoint of how to deal with these within the organization, a lot of times it's a collaboration between us, legal, and IT, to make sure that we're in alignment on how the law is being interpreted, what that means downstream in terms of implementation, and what we're allowed to collect. So a lot of this will depend on your interpretation of the law and how it's being implemented. Then on the tagging side, we have to make sure that our rules match those definitions, and on the audit side we have to catch those nuances. So I think these have to work together to ensure that from a technical standpoint we are functioning correctly, and from an audit standpoint we're testing those rules as well. I think that's where it's a little bit complicated, but it is a collaboration.

Chris O'Neill: (29:35)
Yeah, I think it's interesting that over the last 5 to 10 years, analytics has become extremely cross-functional. We really are the glue that holds it all together. Well, that's our time. I kind of wanted to talk a little bit more about privacy, but Nick, Charlotte, thanks so much for joining us today and talking about how you guys eat an elephant at Epson.
