Successfully Migrate Your MarTech in 4 Phases - Chris O'Neill & Jarrod Wilbur, ObservePoint

Switching or setting up a new MarTech solution isn't always an easy task. Whether you’re moving from one tag management system to another, switching web analytics solutions, or adopting an alternative marketing technology for your website, it’s important to know what steps will lead to a successful migration for any MarTech implementation. In this session you'll learn how to:

  • Overcome common obstacles associated with switching your MarTech solution
  • Execute across the development, staging, and production environments as they apply to your analytics implementation and overall migration
  • Effectively migrate your MarTech solution using a 4-phased approach

 

Chris O'Neill

Solutions Architect

ObservePoint

Chris O’Neill is a full-stack web developer who has been in the analytics space for the past three years. As a Solutions Architect at ObservePoint, Chris helps large enterprises set up automated testing of their analytics scripts with ObservePoint so they can verify custom website tracking is always up and running.

 

Jarrod Wilbur

Solutions Architect

ObservePoint

Jarrod Wilbur is a Solutions Engineer at ObservePoint and oversees ObservePoint’s Script Services activities. Jarrod has worked on a variety of automation processes in the education, banking, and business intelligence sectors. His three favorite letters are A, P, and I, especially when put together in immediate succession.

 


 

Chris O'Neill
Okay, welcome everybody to the last breakout session of the technology governance... what's it called, Jarrod, path?

Jarrod Wilbur
I believe that's what they're calling it.

Chris O'Neill
Yeah. Awesome. So today we're going to talk about how to successfully migrate your MarTech in four phases. I'm Chris O'Neill, Solutions Architect at ObservePoint, and we have Jarrod Wilbur, Solutions Engineer at ObservePoint. Fun fact: the Solutions Engineer is actually the smartest person here at ObservePoint, and Jarrod had a very astute observation. Jarrod, what was your question about my profile picture?

Jarrod Wilbur
Why do you look angry?

Chris O'Neill
Yes. And my response Jarrod is why do you look confused?

Jarrod Wilbur
Why do I look confused? I don't look confused, I just look like I got pulled out of a room to get my headshot taken when I wasn't expecting to. Confused is probably the appropriate word there.

Chris O'Neill
Okay. So I just wanted to throw in a few jokes; last session of the day, we had to make fun of ourselves. But yeah, we're going to talk about how to successfully migrate your MarTech in four phases. Really excited to have Jarrod on the phone. Jarrod has probably done the most migrations and implementations of our clients here at ObservePoint. So I'm probably going to pepper Jarrod with a ton of questions, and he's going to have some great answers. He'll probably pepper me with some questions, and I'm going to have horrible answers. And then hopefully you guys drop some questions in the chat and we'll make this super interactive. Jarrod, do you want to kind of give us an intro? What is the four-phased approach? What does that mean?

Jarrod Wilbur
Yeah. So what we're going to be talking about today is the migration of different technologies across the site. Now for the presentation today, we're using DTM as the example; obviously, DTM to Launch was a huge undertaking for anyone that had DTM implemented on their site. At this point we're at the tail end of that implementation process, but we're going to be using it as our example here. So this four-phased approach is answering the question: when we're implementing new technology across the site, what methods can we use to ensure that we don't lose that continuation of data quality during the implementation? We want to make sure that we keep at least parity, right? The goal of a new implementation is at least to keep parity with the data that we're collecting and the technologies that are there, and hopefully be improving the technology of our data collection as we're doing that implementation. So the goal here is to give some insights that we've found here at ObservePoint into this four-phased approach, to strategize that implementation so that it goes smoothly.

Chris O'Neill
Perfect. Now, you mentioned DTM to Launch, and DTM doesn't exist anymore. So for those of us who only know Launch, could you kind of explain why that's important and why we're using it as our outline, and then how this is going to apply to all migrations?

Jarrod Wilbur
Yeah. Specifically, you're asking just for those who weren't familiar with that entire ordeal, essentially?

Chris O'Neill
Yeah. What is DTM? What is Launch? Yeah, just for a newbie.

Jarrod Wilbur
Specifically, those technologies are container tags. When we talk about container tags, we're speaking of technologies that essentially help to manage the implementation of other technologies. So you can create rules and parameters around a container to fire off other tags. It's a tag manager. Hopefully, if you're here in this session, you know what a tag manager is at this point; it's worth looking up if you're not familiar with it. So, with DTM to Launch, there was an incredible array of improvements: extra data with events, asynchronous calls, things that Launch implemented, that Adobe released, that were an improvement over DTM. But just like all technology, it ages; Adobe is not able to support DTM forever, and therefore had the need to sunset DTM and make sure that Launch was the primary container across all of their customers.
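Jarrod's description of a container tag, rules and parameters that decide which other tags fire, can be sketched in a few lines. This toy model is purely illustrative; it is not Launch's or DTM's actual API, and the rule and tag names are made up.

```typescript
// A toy illustration of what a container tag (tag manager) does:
// rules with conditions decide which marketing tags fire on a page.

interface PageContext {
  path: string;
  event: string; // e.g. "pageload", "click"
}

interface Rule {
  name: string;
  condition: (ctx: PageContext) => boolean;
  tags: string[]; // tags to fire when the condition matches
}

// Evaluate every rule against the page context and collect the tags
// from the rules whose conditions match.
function firedTags(rules: Rule[], ctx: PageContext): string[] {
  return rules
    .filter((rule) => rule.condition(ctx))
    .flatMap((rule) => rule.tags);
}

const rules: Rule[] = [
  {
    name: "all pages",
    condition: (ctx) => ctx.event === "pageload",
    tags: ["analytics-pageview"],
  },
  {
    name: "product pages",
    condition: (ctx) =>
      ctx.event === "pageload" && ctx.path.startsWith("/product"),
    tags: ["retargeting-pixel"],
  },
];

console.log(firedTags(rules, { path: "/product/123", event: "pageload" }));
// ["analytics-pageview", "retargeting-pixel"]
```

The migration question then becomes: after moving these rules to a new container, does the same page context still produce the same set of fired tags?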

Jarrod Wilbur
Now, they put a pretty tight timeline on that. Obviously they changed it a handful of times, but as far as a normal technology decommission would go, they kind of cracked the whip on it, which is excellent for getting people to move, but it did put a lot of pressure on a lot of our clients. And so a lot of our clients came to us asking: how can we leverage ObservePoint to ensure that when we go through that process of implementing Launch, we aren't losing things? Are we keeping that parity? Are we keeping the standard that we had before? Is there data that we're now losing because we're moving to Launch? How do we test all that, and so on and so forth. There are lots of things that we can do to help in that process, and those are the things we'll be focusing on today.

Chris O'Neill
Yeah. Perfect. So the most important thing I gathered from that, Jarrod, is if you don't know what a tag management system is, you're in the wrong session, is that right? Just kidding. Okay. So yeah, we're going to talk about the four phases of any sort of tech migration, but we're going to use specifically the example of DTM to Launch. And then I think the parity will make sense as we go along. So let's talk about the first phase. You can see here we have catalog, and we have two choices: manually or automatically. First of all, Jarrod, what does catalog mean?

Jarrod Wilbur
The better term might be baselining, right? The goal is to create documentation, and baselining is not the only thing here either. We want to make sure that we understand what our current implementation looks like. We want to catalog and document what the current implementation of technologies looks like across our sites, so we have a reference to point at after the implementation has occurred. Particularly when you're talking to your boss, right? You want to be able to say we successfully implemented this new technology, it didn't cause any hiccups, and no teams were impacted by the new implementation of the new container tag. So: documentation, or cataloging, for baseline purposes, for referencing at the end of the implementation.

Chris O'Neill
Yeah. I agree. I think this should be called cataloging rather than just catalog. Well, let's move to the second phase and talk about "Strategize." I mean, this is obviously a super hot word, everyone overuses it, but what are the considerations for you as you think about a tech migration? How do you strategize? What are the big-picture questions that you're asking yourself?

Jarrod Wilbur
That's the question that all technology teams always have to ask, especially when you're looking at new technologies, right? Is it better to build in-house, or is it better to buy off the shelf? So that's the difference that we're looking at here with the two strategies, particularly in the following phases, but in that cataloging phase as well. Do we use a method where we manually document and baseline all of that? Do we manually go through and try to document all the pages that currently have that technology, where it lives, and the rules that are built around it? Or do we use an automated process, like ObservePoint, to go out and determine those technologies at baseline for us?

Chris O'Neill
Okay. Let's press on this and actually get a little more tactical; let's tailor this question for someone who maybe is an ObservePoint client. Would you use audits, or would you use journeys, in a tag migration or a technology migration?

Jarrod Wilbur
The answer to that, I guess I would say, depends on your focus, especially on your site. So on an e-commerce site, you're going to have a bunch of product detail pages, and your key flows are going to be absolutely critical to ensure that data collection. Whereas if we're talking about sites without those critical flows, you're talking a little bit more about standard marketing landing pages. Maybe you have some forms that are critical, but generally speaking, if we're just talking about the general technology across the site, that would lean more towards audits. For those who aren't familiar, ObservePoint audits are like a spider crawl through the site, where we're doing general page loads of linked pages or landing pages, just a standard page load event, and collecting the tags that fire as a result of that page load.

Jarrod Wilbur
The other aspect that I was talking about, those flows, would be a little bit more for journeys. So journeys being: what are those critical paths? I'm going through a booking process or a purchasing flow, and as I'm going through that process the user is clicking on certain things to add to a cart, or they viewed an impression, or whatever it might be. All these different events firing along a path are going to be much better suited to what we call web journeys in ObservePoint, ensuring each action has its appropriate tagging.
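The journey idea, a critical path where each action must fire its expected tags, can be sketched as follows. This is an illustrative model, not ObservePoint's actual web journey configuration; the step and tag names are invented.

```typescript
// A toy model of a web journey: an ordered list of user actions, each
// with the tags we expect to fire on that action.

interface JourneyStep {
  action: string;         // e.g. "click add-to-cart"
  expectedTags: string[]; // tags that must fire on this action
}

// Compare the tags observed at each step against expectations and
// report the first step that is missing a tag, or null if all pass.
function firstFailingStep(
  steps: JourneyStep[],
  observed: string[][] // tags actually seen, one array per step
): string | null {
  for (let i = 0; i < steps.length; i++) {
    const seen = new Set(observed[i] ?? []);
    const missing = steps[i].expectedTags.filter((t) => !seen.has(t));
    if (missing.length > 0) {
      return `${steps[i].action}: missing ${missing.join(", ")}`;
    }
  }
  return null;
}

const checkout: JourneyStep[] = [
  { action: "view product", expectedTags: ["analytics-pageview"] },
  { action: "add to cart", expectedTags: ["analytics-event", "cart-pixel"] },
];

console.log(firstFailingStep(checkout, [["analytics-pageview"], ["analytics-event"]]));
// "add to cart: missing cart-pixel"
```

Run before and after the migration, the same journey definition doubles as the parity check for the critical flow.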

Chris O'Neill
What I'm hearing is, on a tactical level, if we're doing a migration that really just needs to migrate tag presence on page load events: audits, very easy, very useful. If we're more targeted and we need to really ensure the user journey through a specific flow, like a checkout flow or a form fill you mentioned, for those we're going to want to leverage a lot of journeys.

Jarrod Wilbur
Correct. That is exactly correct. And both of those would then be leveraged so we can take an inventory of those tags in both of those tests, or either one depending on the nature of your site, and use that as a baseline against future testing once we've actually completed the implementation.

Chris O'Neill
Oh man. I feel like I'm in school and I just got the right answer. So thank you.

Jarrod Wilbur
You did a good job, O'Neill. I'll send you a Motivosity buck.

Chris O'Neill
There we go. There we go. Let's talk about phase three. This is interesting. So when we're talking about the actual migration, there are two approaches: there's the Lift & Shift, or there's the fresh start, or Start Fresh. Do you have a preference, or how do you decide which approach to take?

Jarrod Wilbur
The answer is the fun answer: it depends. I hate giving that answer. I feel like I give it far too often, but it does. It just depends.

Chris O'Neill
It's smart. It's a smart guy answer.

Jarrod Wilbur
It's the nice neutral answer that gives nice wiggle room, but it's the honest answer here, because it depends on the complexity of your current implementation. Especially if we're talking about DTM to Launch, we're replacing a technology, not just putting in a new technology. If your DTM was extremely complex, if your rules inside DTM were extensive and very detailed and you've been taking the time to actually keep everything up to date, a fresh start is going to be a pain; implementing the new technology from scratch is not an easy task. Thankfully, Adobe did provide good resources to allow that Lift & Shift process to be pretty darn smooth, importing all those rules into Launch. But if your implementation was pretty simple and basic, a fresh start might actually be a good chance to sit down and rethink that implementation. If it's not a mess and you don't have a lot of rules, then a fresh start might actually be a pretty good option for you.

Chris O'Neill
So if there's a high level of complexity and we have reusable parts: Lift & Shift. If it's too chaotic or if it's very simple and there's really not a lot to pass on, you would prefer the fresh start?

Jarrod Wilbur
Yep. That's what I would say. Obviously we're using DTM as the example, but this is applicable to any form of new technology or any shift in technology.

Chris O'Neill
Yeah. Perfect. And I think the recommended approach with DTM to Launch specifically was Lift & Shift, right? They recommended "upgrade to Launch" in DTM: find your new property, test in development, test in staging, link, and then push to prod. So that makes a ton of sense; a tag management system can be complex, so I can see that. And Start Fresh, I think that's pretty self-explanatory; a lot of us start fresh with a lot of technologies. Let's move on to phase four and talk about testing. This is the most interesting part of a migration. Talk to me about testing. Where do you want to start?

Jarrod Wilbur
Where to start? Well, testing: that's why we started this entire thing off with that baseline, so we have that documentation, whether that be an SDR or whatever you use for your documentation for tag governance. The idea of testing now is... well, I was about to say now that we've finished the implementation, but I'd be doing a disservice by saying you should be testing only when you've finished your implementation. I've actually heard that a lot: "Oh yeah, we'll test once and we're all done." I think that's actually an incredibly wrong approach, personally. You're assuming that through the process you're not going to make any critical mistakes in your implementation. The idea is, as you're implementing new technologies, you should have a QA process that runs throughout the entire implementation phase, where we're constantly monitoring that new technology to ensure, through the entire process, that it's being implemented in the manner we're expecting, particularly in lower environments.

Jarrod Wilbur
That's a thing I'll hit on as well. For new technology implementation, whether that's tagging or a new deployment of a backend system, whatever it might be, a staging environment is absolutely critical. If you're not testing your tagging in lower environments, you really should be thinking about that and really be implementing it. Testing in prod, I have seen it, and it's a nightmare. I kind of shake my head a little bit whenever I do see it, because you can cause a lot of headaches, a lot of heartache, and a lot of lost resources by testing in production. You're doing yourself a disservice.

Chris O'Neill
Okay. A lot to unpack there. First, what I'm hearing is, if you were Elon Musk, you would not test your rocket mid-flight; you would test it beforehand somehow?

Jarrod Wilbur
I would test every single part of the engine. If I'm Elon Musk and we're talking about SpaceX, I would test each individual part independently as I'm building the rocket, not just build all the parts, put the rocket together, and then see if the rocket works. No, you want to make sure that each part, as you're building it, is working according to its specs. And that's the same thing with technology; same thing with a car. I don't go to the mechanic and have them build everything, turn on the engine, and see if it works. You want to inspect every single part to make sure that it works independently.

Chris O'Neill
Interesting. Okay. That's very Toyota Way of you, by the way. I'm not sure if anyone's read that book, but test as you go is one of its main principles. What I'm hearing is not only should you test in lower environments, but you should also be testing as you go, so that you don't get to the end result, run a test, and then have to work backwards to find everything and fix it. You're saying the more efficient way is to test as you go?

Jarrod Wilbur
The largest issues I've ever found are generally with those who have the strategy of testing at the end. You think that you've thought of everything; we all know that when doing a project, you for sure have not thought of everything. So with a mentality of "I have thought of everything" going into implementation, that final result is not going to look very good. You're going to be dealing with larger, more comprehensive, and more global issues if you don't test along the way.

Chris O'Neill
And then, kind of going along with this theme, I'm going to unpack a few other things that you said: the testing actually starts at the catalog phase. So while you're cataloging, that's actually the first step of your testing. And then you test throughout, and even when you're done implementing and you've migrated, you're not done testing; now you keep testing going forward. And each step builds on the previous step. Am I unpacking that correctly?

Jarrod Wilbur
That's exactly it. I use a lot of words; Chris is very good at summarizing my long-windedness. That's perfect. That's exactly what I was saying.

Chris O'Neill
Well, those with high IQs generally are, so I think you're fine. I know this is getting a little too intimate. Any questions in the chat? I do want to check that real quick. I don't think I've seen any so far. Okay, if there are, just go ahead and write them in there and we'll answer them.

Jarrod Wilbur
We will have a Q&A at the very end of this, by the way for those on the call.

Chris O'Neill
Oh, I was trying to test as we go, Jarrod.

Jarrod Wilbur
If you have questions along the way you can ask, but we will have a dedicated Q&A at the end.

Chris O'Neill
Talk to me about the data layer. How does the data layer play into all of this? The data layer is not a technology per se, so why are we talking about the data layer in a migration?

Jarrod Wilbur
Why would we not talk about the data layer would be my rebuttal question. One, if you don't have a data layer... thankfully, I've only seen that maybe once or twice ever. It boggles my mind not to be leveraging one, especially with technologies like Adobe Analytics or Google Analytics, or whatever technology you're using, or Launch and DTM. Data layer validation is probably one of the more critical things; I actually feel it's probably the least utilized testing done at ObservePoint, and it should be leveraged a lot more. Testing the data that is actually flowing to the tag is excellent, but your data layer is the core object that you're referencing to build all of those events, everything that's firing.

Jarrod Wilbur
I actually see, a lot of times, just tag validation and no data layer validation, which I would say is almost dangerous. You could be testing the tag, but you should always be testing that data layer very consistently, ensuring that if you do have event objects or certain attributes flowing into it, they're meeting your standards. Especially with ObservePoint, it's not very difficult to create alerts and rules around those data layer objects, ensuring that as the page loads they're being built correctly.
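A minimal sketch of the kind of data layer check Jarrod describes, assuming a W3C CEDDL-style `digitalData` object; the required paths here are hypothetical examples, not ObservePoint's actual rule syntax.

```typescript
// Validate that required attributes exist on a data layer object as
// non-empty strings, reporting any dot-paths that fail the check.

type DataLayer = Record<string, unknown>;

// Dot-paths that must resolve to a non-empty string on every page.
const requiredPaths = ["page.pageInfo.pageName", "page.category.siteSection"];

// Walk a dot-path down a nested object, returning undefined on a miss.
function getPath(obj: DataLayer, path: string): unknown {
  return path.split(".").reduce<unknown>(
    (node, key) =>
      node && typeof node === "object" ? (node as DataLayer)[key] : undefined,
    obj
  );
}

function missingPaths(dl: DataLayer, paths: string[]): string[] {
  return paths.filter((p) => {
    const v = getPath(dl, p);
    return typeof v !== "string" || v.length === 0;
  });
}

const digitalData: DataLayer = {
  page: {
    pageInfo: { pageName: "home" },
    category: {}, // siteSection missing: should be flagged
  },
};

console.log(missingPaths(digitalData, requiredPaths));
// ["page.category.siteSection"]
```

The same check, run on every page load during the migration, is what catches a rule that stopped populating an attribute before the bad data reaches your analytics reports.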

Chris O'Neill
So, really important: Jarrod, I caught you data layer shaming, and that's okay. I've heard people refer to the data layer as the single source of truth. So I guess I'm going to ask you: if you could only test one thing, analytics tags or the data layer, which would you choose and why?

Jarrod Wilbur
I only get one choice? I can't choose both? I mean, the data layer is, like you said, the source of truth. That's a softball answer you gave there, right? It is the source of truth on so many different levels; therefore data layer validation is probably one of the most critical things that you could be doing. Hopefully you understand how the data layer works.

Chris O'Neill
Let's talk a little bit about Carnival. They had a nice use case back in the days of DTM; they did a DTM to Launch migration. Jarrod, were you familiar with this?

Jarrod Wilbur
I actually wasn't involved with Carnival's implementation, but I definitely heard plenty about it. I know they did an excellent job and they leveraged us very, very well. That strategize and catalog phase that we were talking about, they took that to heart, that's for sure.

Chris O'Neill
So I think there were five main points for why they preferred using automation in their DTM to Launch migration. One was maintaining consistency. I heard you mention this a couple of times; you definitely talked about it during your data layer love fest. Documenting, I think, is probably the catalog phase, is that what you would say? Maintaining consistency starts with automating the cataloging?

Jarrod Wilbur
Yeah, keep going, keep going, sorry.

Chris O'Neill
I was just gonna say, after that we're talking about handling volume: how much are we cataloging? Obviously, I think it's pretty clear, doing this manually has a lot of limitations; automation is more comprehensive. Working efficiently when resources are limited, that screams automation. And establishing the baseline with web audits, I think that's what you talked about with catalog.

Jarrod Wilbur
That's exactly right.

Chris O'Neill
Perfect. And then validation, was that phase four, where we were finally doing the testing? And then I think they left off a point, which you brought up: leveraging the documentation you had done for testing the migration and turning that into ongoing monitoring.

Jarrod Wilbur
Correct. I guess that's actually something we haven't even covered in any of these points: that ongoing process. This whole four-phased approach shouldn't be a one-time approach, if that makes sense. We shouldn't be doing these things only a single time, because we know one thing about technology: it fails us sometimes, especially when we're talking about an entire website. A system like that is incredibly complex, with a lot of hands in the cookie jar, and we see errors constantly. So that constant monitoring, especially since you've already set up this process to do validation, now it's simply a matter of setting up a process to constantly run those tests and that validation, hopefully around your release cycles, the point in time when your site is most vulnerable.

Chris O'Neill
So, if you were to clean up your room, you're saying, go ahead and leverage all that work you've done and let it continue on to maintain that clean room. Is that fair?

Jarrod Wilbur
You clean your room once a week, you don't clean your room once.

Chris O'Neill
We should make a slide for that. So, Tim Crandley, at the time Head of Global Marketing there, had this to say: ObservePoint was uniquely positioned to help anybody who needs to transition between these two tools, specifically DTM to Launch, and make it super easy. But it sounds like this could work for any two tools. We're at the four-minute-and-16-second mark, so per Jarrod's instructions, let's open it up to Q&A. Jarrod, do we have any Q's? Trying to look here. Let's see if I can find any Q's. I don't know if I see any. Jarrod, do you want to make up a question, or should I make up a question?

Jarrod Wilbur
Where do you get your haircut?

Chris O'Neill
Great clips. Where do you get your haircut?

Jarrod Wilbur
I don't believe that.

Chris O'Neill
Okay. Jarrod, let me ask you this. What was the most complex migration you helped with, and why?

Jarrod Wilbur
That's a tough question. I feel like all of them had their own complexity. None of them stick out in my head, nor do I really think I have permission from any of them to call them out specifically.

Chris O'Neill
Let me ask you this: when you're doing this migration and you're using an automated technology, how important is the documentation phase? Are you exporting things? Are you sending things to other groups? How does this work cross-functionally with other groups?

Jarrod Wilbur
I mean, you've done demoing with solutions engineering, I've done demoing with solutions engineering, and I think one of the most interesting things, and it makes this a little difficult, is that aspect of cross-functional teams in an organization leveraging our tool, because we do a lot of different testing. A lot of teams can leverage us. It does create a challenge to make sure that each team is getting their individual needs met, especially in these implementation processes. Trying to think of the best way to word this, Chris.

Chris O'Neill
No, it's okay. Let me ask you this. You mentioned you would QA the data layer, and you mentioned we would QA the tags. What else would you QA? What else would you grab? You kind of alluded to possibly scraping things. What else would you bring in as part of this process?

Jarrod Wilbur
Yeah, privacy in particular. We're really excited here; in two minutes we're about to get cut off and go into the product release, but we're focusing a lot on privacy, particularly cookie information and full request log information, and performing different privacy testing. For any teams involved with new regulations, making sure your organization is compliant with those regulations is a big deal, and we're addressing it pretty heavily here at ObservePoint and will continue to address it. We're really excited to announce these new product lines at ObservePoint to you here in a minute.

Chris O'Neill
50 seconds. I want to ask just one more question. So you're saying there are parallels between doing a technology migration and implementing a consent management platform, for example. So what would you catalog? What would be your phase one?

Jarrod Wilbur
Correct. Phase one is just doing a standard anonymous-user visit to the site: what does that look like when someone hasn't consented? And then doing comparative testing against that: setting up a session of a user clicking to accept cookies, and then comparing what tags are firing between those two, someone who has consented and someone who has not.
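The comparison Jarrod describes can be sketched as a diff of tag sets between a non-consented and a consented session. The tag names and allow list are invented for illustration; a real test would collect the fired tags from the network requests of the two crawled sessions.

```typescript
// Given an allow list of tags permitted before consent, flag any tag
// that fired in the anonymous (non-consented) session anyway.
function tagsFiredWithoutConsent(
  allowed: Set<string>,
  fired: string[]
): string[] {
  return fired.filter((tag) => !allowed.has(tag));
}

// Tags we permit before consent (e.g. strictly necessary only).
const preConsentAllowList = new Set(["consent-banner", "analytics-anonymous"]);

// Observed in a session where the user never clicked "accept cookies".
const firedBeforeConsent = ["consent-banner", "retargeting-pixel"];

// Observed after the user accepted cookies.
const firedAfterConsent = [
  "consent-banner",
  "analytics-anonymous",
  "retargeting-pixel",
];

console.log(tagsFiredWithoutConsent(preConsentAllowList, firedBeforeConsent));
// ["retargeting-pixel"]  -> fired before consent: a compliance gap

// Tags that only appear post-consent are the expected behavior.
const onlyAfter = firedAfterConsent.filter(
  (t) => !firedBeforeConsent.includes(t)
);
console.log(onlyAfter); // ["analytics-anonymous"]
```

The same set-difference idea extends from tags to cookies: swap tag names for cookie names collected in each session.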

Chris O'Neill
Okay. I'm going to ask you a question, and then don't answer because we're out of time: does that also include cookies? Don't answer. I guess we'll have to find out next time. Thanks, Jarrod. You were great. Appreciate it.
