Have you ever wondered how you could streamline your analytics QA process and ensure better data quality—and do it automatically? Gabriele Sibani, Account Executive at BitBang, and Mike Fong, Senior Consultant and Solutions Engineer at ObservePoint, have teamed up to show you how!
In this webinar, How to Improve Data Quality for Better Customer Experience, you'll discover how to:
- Create and implement a tag governance process
- Improve data quality through automated QA testing
- Integrate data governance seamlessly into your release cycles
- Maximize efficiency in your analytics QA process
Welcome. Thanks for joining our webinar, How to Improve Data Quality for Better Customer Experience. I'm Kelly, Marketing Manager here at BitBang, and I'll be hosting the webinar. With me today I have Gabriele Sibani, Account Executive at BitBang, and Mike Fong, Senior Consultant and Solutions Engineer at ObservePoint. Morning to you both; glad you're here with me today. I'll quickly go over a few housekeeping matters before we start. Firstly, thanks for joining us. We'll try to keep the presentation to about 35 to 40 minutes, which will leave us time to answer any questions at the end. Please submit any questions you have in the question box below and we'll respond to them. You will also see an attachments section with a couple of documents we'll explain in some detail, including the Tag Governance Framework, so please feel free to download them. The webinar is being recorded, so you will be able to view it on demand following our live session; feel free to share the link with any colleagues who can't make it today. We'd love to hear any feedback you have on this webinar. And finally, the icon in the center is just to say that we're all still presenting from our homes today, so we're relying on our home broadband signal. If there are any slight glitches, please do bear with us.
For a quick overview: I will soon pass you over to Gabriele, who will cover the impact of data and data quality and why it's so vital for every business to get it right. After looking at some of the major pitfalls, we'll address why tag governance can really help you trust your data and make data-based decisions with confidence. Mike will then take you through what tag governance is, with a quick demo at the end. And we'll finish off with a question and answer session before summarizing the key takeaways. So without further ado, I'll pass you straight over to Gabriele.
Good morning everyone. I'm Gabriele Sibani. I currently work as an Account Executive at BitBang, where I normally design strategies for digital marketing and analytics projects, and I've spent a long time designing measurement architectures. Just a few words about BitBang: we have almost 20 years of experience in enterprise data-driven analysis and projects, and our mission is to help organizations leverage their data to reach their business objectives. We provide our experience and best practices on both the technical and the strategic side, and we do that by partnering with best-in-class technologies such as ObservePoint, and we do it at scale.
Since we are a data-driven company, we have always been concerned with data quality matters. As we want to help organizations get outstanding business results, we don't only need data to work on; we have to make sure we are leveraging good data, because we know this makes the difference. So if you're here, you're probably already concerned about the negative impact of poor data quality, or of a complete lack of a data quality strategy. Market researchers agree in estimating big numbers for recent years, and the latest trends confirm an ever-increasing negative impact on business results. And it's not only the market: executives and managers within companies also seem to be concerned by data quality issues.
As you can see from recent surveys, researchers depict a scary landscape, where the decision makers themselves don't completely trust the data they are leveraging for their strategic decisions. And if they don't trust their own data, why should clients trust their company? Fortunately, there is a well-known discipline called data quality management to deal with these issues. Its processes and activities should keep data quality high through the whole data supply and decision chain. From the literature, we know that data quality is assessed along a few main dimensions. Accuracy and completeness concern the semantics: checking that the data is correct and complete. Data obviously needs to be relevant, otherwise it's useless, and that matters for what we are going to see in a bit. We need validity as well, which concerns how we collect data rather than what we are going to collect. Data must be timely too, or it can even be dangerous to use. And last but not least, it has to be consistent with its context and with other data. It seems a big effort, but we know it's well worth it. Among the results we can get: better decisions, better audience targeting, and more effective content and marketing activities. In the end, we are improving the relationship with the customer if we have good data to work on.
I think we just lost Gabriele. He's going to rejoin us in a couple of minutes. Please do bear with us.
I think that's the risk of everybody working under lockdown, or working from home. Obviously in an office you've got dedicated, business-quality landlines, whereas at home these days I don't think everyone even has a landline, because we all focus on and depend on our mobiles so much.
Yeah, I agree. Mike, do you want to explain a little bit about yourself whilst we wait for Gabriele?
Sure, yeah, why not? Hi everybody. My name is Mike Fong. I'm a Senior Consultant and Solutions Engineer here at ObservePoint. What I do at ObservePoint is really understand the market: I talk to our customers and really anybody who has a stake in the market, we learn to understand the problems, and essentially we bring that feedback together. Where is the market right now? What does the market need? We feed that back into our product team to allow us to build the products that will help solve those problems. ObservePoint was originally founded to solve a data quality issue. So once Gabriele makes it back, I'll be leading the second half of this webinar to describe how ObservePoint can help.
Do we have Gabriele back? Okay, I'll keep going. ObservePoint was actually founded by John Pestana, who is one of the co-founders of Omniture, which is now known as the Adobe Analytics product. So obviously it's got a strong heritage in the analytics space. One of John's self-proclaimed nicknames is actually the godfather of analytics. I haven't had this verified by anybody else, but to have someone who can even lay claim to that is very strong. And I think Gabriele has returned.
Yes, sorry. I've had network issues and all the problems of the world at this moment, so apologies; it rather explains itself. Okay. Thanks, Mike.
We were talking about some common issues we find that are all related to a lack of strategy, and a lack of a joint, shared plan for dealing with data quality across the project. The main fact is that it's very difficult to estimate the business impact of a lack of data quality, or the value of an improvement program, and so the business finds it difficult to allocate money and effort to such a project. We have a way to deal with this, and we start from the foundation. We normally propose to our clients and prospects to start from the data collection phase, because we know from research, and everybody knows, that gathering data, ETL, and normalization take a lot of effort and money.
So we want to avoid all the possible waste in this kind of work, and we want to start assuring the quality of data from the data collection phase onwards: normalization effort increases with data volume, storage space costs a lot, and validating before collecting enables real-time automation if we want to leverage automatic triggers. In general, validating at the source enables agile integration and breaks silos. So we want to move in a direction where we validate before collecting, or as we are collecting. And now, with GDPR and other regulations, we are walking a narrow ridge: on one hand we have all our aims and objectives of knowing the clients, since we want to profile, to customize, to be flexible in how we handle their experiences; on the other hand we have all the regulations, internal policies, and so on. But this is a very good challenge for us, because there is competition, and if we are good in the way we approach data collection, maybe we can make the difference against our competitors. So our proposition for a data policy is based not only on how we are collecting and validating, but on when and whether we are collecting. If we start collecting only what is needed, we can get better data and have more time to validate it, gather it, use it, and get a lot of wonderful insights. If we consider a coin: on one face we have our desire to measure and profile, and on the other face we have to declare and comply. The only link between these two is governance.
So we propose to promote governance: frameworks and policies for ensuring the tags and codes are firing when and where desired, that all the properties and variables are set properly, and that we are complying with all the regulations and policies that our company, or our relationship with the clients, requires.
And why should we do that? Because it helps with all the things we already said, but we also have to detect performance issues: if our tags, our measurement codes, increase the risk of problems in the experience, that matters, because we want to provide a better experience, improve our ROI, and save costs. And then, as I said, we want to comply and standardize processes. All these actions concur in creating and improving a stronger trust with our clients, and internally, trustworthy data as the foundation. In the end, what we are proposing is a "less is more" strategy: rather than dealing with data growing at this kind of scale, if we track and collect only what is really useful, and validate what is necessary to get the best insights, we are going to act and win on a smaller set of data, but on exactly the data we really want.
We have a first phase of designing a strategy; we have to automate and scale up when we validate; we have to think omnichannel and customer-centric when we integrate with other data; and we have to focus on KPIs and the desired results in order to have a good implementation and keep on measuring all these results. This forms a kind of increasing scale of value and commitment: starting from a larger set of data and a strategy, we ask if and when to collect, and then we start again with another phase of measuring, because we are never finished measuring. And then, as the last point before passing the word to my friend Mike, our suggestion is to do all of this at scale. All these activities are wonderful, but I have to say they require a lot of time and effort. They are really worth it, though, and they will provide outstanding results if they are done at scale.
If you are going to look for a tool or platform like ObservePoint, as we are going to see, it should integrate with the principal TMS and MarTech solutions; it should handle automatic auditing, which can hopefully give you higher data quality; and it should handle customer journeys, simulating specific customer journeys in order to validate and assure them, providing reports that let you share business insights. It should also support apps and mobile, a channel with ever-increasing importance and interest. With all these capabilities, we can build a real data quality management program and a tag governance framework that can provide the results you expect for your business. So thank you for your attention. Mike, it's over to you, and thank you for covering for me earlier.
Great. Well thank you very much Gabriele. It was no problem at all.
Thank you, Gabriele, for outlining very quickly what the problems with data quality are, and also, in some of your earliest slides, the value that can be gained by solving those data quality issues. For those of you who joined late, I'll introduce myself again: my name is Mike Fong, Senior Consultant and Solutions Engineer at ObservePoint. ObservePoint is a SaaS software platform that exists to work with our customers, through our technology and our consulting team, to help them solve the data quality issues that Gabriele outlined. And we love working with BitBang, because they understand the value of the solution so well and they know the cost of the data quality problem, so we work very closely with them in Europe.
Okay. So I'm going to talk firstly about the tag governance maturity scale. Everybody who's listening in on this webinar should be able to, anecdotally or instinctively, place their organization on this scale, starting from the left, which is mostly reactive, to the right, which is very strategic and forward thinking. Organizations are all trying to move from left to right, and working with organizations like BitBang and ObservePoint can help you move towards that goal. Of course, the reactive state is very much the worst-case situation, where data quality issues pop up in the live environment and you have to react by solving the issues.
Now, 10 or 15 years ago, that would have been a very difficult task, because you would have had to ask your IT team to change a tag, maybe Adobe Analytics or Google Analytics, and often analytics work would fall to the bottom of the pile. In modern times, of course, tag management systems allow digital analysts and data architects to make changes much more quickly, giving us a lot of agility. But that agility is a double-edged sword, because the processes and safety checks in place are very formally adhered to by IT teams, whereas digital analysts are much more agile, and in many places we haven't got those formalities of peer review and checking. So that's the ultimate reactive state. The next circle, ad hoc problem solving, goes hand in hand with the third one, manual spot checking, and this is where most organizations are right now. They know that data quality is an issue, and they know that they're resource constrained as well, so the most logical thing is to manually check the situations where problems have happened before, or to do ad hoc problem solving where there's a problem right now, but you have to troubleshoot and understand it before you can get to a solution. Conversion path monitoring is the next step up. As Gabriele mentioned, the further you go, the more you realize that spot checking isn't good enough: checking one page or one interaction here and there doesn't cut it, because it's actually a customer journey you want to check, not just a single page load or a single interaction. A whole, holistic journey, landing on the homepage through to, as far as you can go, a conversion point, allows you to check things much more holistically.
But then comes the problem of actually ingesting that data. Imagine manually checking a typical conversion funnel: land on the homepage, manually check that; add to cart, manually check that; go through the checkout page, manually check that; go through to the order confirmation page for a new order, manually check that. That's a lot of data to check by hand, and to keep it all in a person's brain, or even to document it in an Excel spreadsheet. And so that's where technology really comes in. Technologies like ObservePoint will help move you from that center point through to the right, the strategic side. Automated tracking validation is what we provide. Then there's process and workflow integration: ObservePoint and technologies like ours allow customers to integrate with Jira or Microsoft Teams, and allow the digital analytics troubleshooting process to start working and living in your development team's QA process.
Ultimately, what needs to happen in our industry is for quality assurance of digital marketing to catch up with quality assurance of product development. Think how much money is spent on product development, and then how much money is spent on the QA for that product development. Then think how much money is spent on marketing and MarTech, and how much money is spent on MarTech QA: those ratios are completely different. I would say some organizations don't even think about spending any money on MarTech QA, but they would always put aside a pot of money for product and development QA. So that's a paradigm where marketing teams need to catch up. At that point, with the technology in place, you can start thinking strategically, and that's what all of our clients are working towards.
So let's move on. Oh, I was in PowerPoint there, so I clicked the arrow on my keyboard, but it didn't quite work. Okay. So I guess the question is: do you need tag governance, or where are you on your tag governance scale? See if you can answer these questions confidently. Are there misconfigured tags on your website? This refers back to Gabriele's presentation earlier. Misconfigured tags are tags which are present, but is the data inside them correct? For Adobe Analytics users, are all your props less than a hundred bytes? For Google Analytics, do you have all the data you need if you're using the free tier with a limited number of custom dimensions? So misconfigured tags really means tags which are not doing the job your digital strategy needs them to do correctly.
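The byte-length point is easy to get wrong by eye, because multi-byte characters make a short-looking string exceed the limit. The following sketch checks captured prop values against a hypothetical 100-byte cap (the function name and input shape are illustrative, not any vendor's API):

```python
def overlong_props(props, limit=100):
    """Return names of props whose UTF-8 encoded value exceeds the byte limit.

    Values beyond the limit risk being truncated by the analytics vendor;
    multi-byte characters can push a short-looking string over it.
    """
    return [name for name, value in props.items()
            if len(value.encode("utf-8")) > limit]

# Example: the second value is only 51 characters, but 102 bytes in UTF-8.
captured = {
    "prop1": "home page",
    "prop2": "è" * 51,  # each 'è' encodes to 2 bytes in UTF-8
}
print(overlong_props(captured))  # ['prop2']
```

A real audit tool would capture these values from the beacons sent by the browser; the validation logic, however, is exactly this kind of rule applied at scale.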
Are there unauthorized tags on your website? This question is really two-fold. One: are there old tags that your website previously needed but no longer needs, perhaps from a vendor you're no longer partnering with? We call those legacy tags. The second half is piggybacking tags: are there third-party tags loaded onto your website by other third parties, which may be slowing things down for you or taking data from your website? These are very important things to note. Unauthorized tags are probably providing you no value, but at the very least they are definitely slowing down your page, and potentially taking sensitive data. If we think back to Gabriele's ridge analogy, this is very much on the dangerous side of the ridge: you're off the ridge and falling down the canyon, and that is providing no value, only risk.
Third point: are there duplicate tags on your site? This is a bit less relevant than it used to be in the olden days. Duplicate tags used to cost you twice as much, and they still do, but they also used to double count your data. A lot of technologies now automatically have a de-duplication aspect, so your data is no longer double counted. But if you send two page-view tags from your homepage, that's a lot of extra tags you're sending, and that is costing you money if your vendor is charging you on a CPM basis. So it's always good to know that you're minimizing those. And it goes back again to Gabriele's point: track only what you need and make sure it's done well. Duplicate tagging is very much not necessary.
And then, are there missing tags on your website? That's the flip side of this whole tag presence angle. Missing tags are simply important pages that are not tagged. In my career as a digital analyst, there were a number of times when we stumbled across a page that didn't have a tag. And sadly, it's impossible to know that you have a missing tag from inside your reporting system, because the data isn't in the reporting system; how can you report on something which isn't there? That's why you need an outside third party running and checking things at the point of creation of the data, that is, the tag being sent from your customers' browsers. That will help you ensure quality on those pages.
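Because the reporting system cannot show its own gaps, the check has to run against the pages themselves. This sketch flags crawled pages whose HTML lacks an analytics tag marker; the page list and marker string are illustrative, and a real scanner would inspect the actual network requests rather than the raw HTML:

```python
def pages_missing_tag(pages, marker="analytics.js"):
    """Flag pages whose HTML lacks the analytics tag marker string.

    `pages` maps URL -> raw HTML, as gathered by a crawler. A string match
    is a simplification; production tools verify the beacon actually fires.
    """
    return [url for url, html in pages.items() if marker not in html]

crawled = {
    "/home": '<script src="analytics.js"></script><h1>Home</h1>',
    "/promo/spring": "<h1>Spring sale</h1>",  # untagged landing page
}
print(pages_missing_tag(crawled))  # ['/promo/spring']
```

The key design point is that the check runs from outside the reporting stack, at the point where the data is (or isn't) created.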
So ObservePoint has come up with what we like to call the tag governance process. It's a circle, or a hexagon, but in essence we see six stages to this process. And it's definitely a cycle; it's not that you go through all six once and you're healthy. The first stage of this process is planning. Like all good things, it starts with planning, and we like to say that any plan must start with a group of people who actually care about the plan. We like to call this the tag governance council, but in essence it is a group of leaders or stakeholders in your organization who are in a position to actually execute on change, to decide on the strategy, and who have a vested interest in the quality of the data. We're talking about CMOs and CDOs: people who need a return on investment from their marketing spend, or someone who needs to know that the data being sent from the website about their customers to third parties is secure. So we're talking about walking the ridge again.
What does this tag governance council actually do? Well, firstly they should design and document what they want the data landscape to look like. They should say: we need a tag for analytics, and we've chosen Bizible as our vendor; we need a tag for A/B testing, and we've chosen this other one as our vendor. Once you've actually chosen your vendors and you keep on top of that, then following through with the later stages of the cycle becomes very easy.
Second point: strategize how you will test. If you know that you are an eCommerce website, you know that you want to run an ObservePoint audit, for example, on all your page loads and all your products, and you want to run simulated journey testing on your most important conversion journeys. And you want to prioritize those, so that your revenue-driving journeys get a higher-priority test plan and higher-priority maintenance than your less important ones, like, for example, an email signup or a help section journey. So that's where we're moving into strategizing.
Moving on to comply. This is part and parcel of the planning stage: comply is how you plan to comply with the regulations in place in your jurisdiction. Of course, you can't just do your testing; you have to document that you're actually doing the testing, so that you can potentially show the authorities what you've done. One of the features of ObservePoint is our one-year rolling data warehouse, so you've always got one year's worth of data to refer back to. If you come across a bug at some later point in time, you can look back and see when that bug was actually introduced, and how you need to improve your tests to make sure that bug isn't missed again. When you talk about regulations, you start talking about GDPR, the European General Data Protection Regulation. There are equivalent regulations popping up around the world: California, and I've noticed some coming from Asia as well. Again, don't just talk about it; ObservePoint can actually support you in executing on it. ObservePoint has reports to tell you where your data is going, and can scan and check all your third-party requests for PII, among many other things. So I won't keep going on about that.
Okay, deploy. This is actually a very high-risk phase. With all the good intentions in the world, you can plan and document, but you may not actually execute because of time constraints, and that would be a shame, to be honest, because a lot of planning and documentation goes in, only for your analysts to simply say: we haven't got time to do the testing. It's a false economy, really, because if you don't put the testing in place, the bugs come back to haunt you, and they can take up to a hundred times more resources to fix, and these are all costs to your business. One of our clients did an anecdotal study and estimated that a bug caught in the QA process is 100 times cheaper to resolve than a bug caught on the live website. Once you factor in the time it takes for your analytics team to be confused by the anomalous data, the time it takes the analysts to figure out that it's a bug, the time to communicate it to the product team or raise a ticket in Jira, and then the time for the development team or tag management team to switch tasks mentally and go back to fixing it, not to mention the lost customer data and the lost opportunity of not being able to activate a solid data set, it adds up to a hundred times. So make sure you actually do deploy.
Part of that is testing in a lower environment. As with the analogy I made earlier about QA teams in product development: you don't just launch a product and then see if it works in the field. You test in your laboratory; you test your car in the laboratory, on the conveyor belt; you test your product in the staging environment. The same thing applies to digital analytics and MarTech: you should be testing in your pre-production environments, and ObservePoint can be configured to run in those pre-live environments.
Then QA. This is the phase where you actually look at everything you have. It's not just looking at the report and saying, okay, everything's fine; it's how you get the reports and how you systematically work through them to look for bugs. QA is also how you prioritize things: if a bug has happened before, then it's much more likely to happen again, so that's how you prioritize. And this forms part of the circle, integrating what you learn back into the deployment phase, so that the process keeps learning.
Then moving on to the fifth part of the cycle, and bear in mind that all of these are probably happening in tandem; different parts of your team are working on different parts of the cycle at any one time. Validate is really about looking for issues in your current implementation, and also tweaking the QA process while you fix them. As I said before, if you've ever had a bug happen in the past, then you should add a check for it to your next testing plan, whereas if you know that something is highly unlikely, you can deprioritize it. You eventually end up with a checklist of unit tests and regression tests, and ObservePoint's automation features allow you to run all of these very quickly, on an ad hoc basis or on a regular basis.
The ad hoc basis really fits into the earlier slide, at the bottom right of that cycle: you do your testing as and when appropriate, for example at the end of a sprint or when a bug is found. But monitor, that top-left highlighted bubble, is about running background checks on your live environment. ObservePoint has the ability to run ongoing, real-time monitoring in your live environment as well. That gives you a double whammy: you have your quality assurance in staging, you have your quality control in live, and this is a constant feedback loop. Throughout that process you should be improving your tests and improving your processes as well. The tests simply find the problem; how your organization reacts to that diagnosis is the true test of data quality management. If something happens, you put in place a root cause analysis, an RCA, to put in place the procedures to ensure it never happens again. It might be adding a certain process to your development team; it might be limiting the number of people who can publish in your tag management system.
Either way, what you always hope for in quality assurance, regardless of whether it's digital marketing or development, is that a bug which has happened in the past never happens again. Over time you're able to reduce the number of recurrences, which means you're able to reserve your resources for product development rather than bug tackling; that backlog of bugs is what we call technical debt. Once you're in a strong place with tech debt, your dev team and your tagging team feel very free to charge ahead with the projects that need to be done. Technical debt is a huge killer in many IT projects, as I'm sure you all know.
Okay. The point of automation, and Gabriele touched on this as well, is that the QA stage and the monitoring stage can be automated. As I said, ObservePoint has a strong set of APIs which allow you to trigger and run these tests as and when needed. So you might be using a continuous integration platform, for example Jenkins, and using the API you can hook up to the ObservePoint platform and automatically run the relevant tests as and when you run certain scripts in your CI platform. We also have many automation integrations with tag management systems, which allow us to automatically run a relevant ObservePoint test as and when you publish to your tag management development environment or your tag management live environment. Taking the brainwork out of the actual execution of testing, and putting the brainwork into the insight and the design of testing, allows customers to gain massive value.
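A CI hook of the kind described usually boils down to one authenticated HTTP call after a deploy step. The sketch below builds such a request; the endpoint path, payload shape, and auth header are illustrative assumptions, not the real ObservePoint API, so consult the vendor's API documentation before wiring this into a Jenkins stage:

```python
import json
import urllib.request

API_BASE = "https://api.observepoint.example/v2"  # hypothetical base URL

def build_run_request(audit_id, api_key):
    """Build (but do not send) an HTTP POST that would trigger an audit run.

    In a CI pipeline this call would follow a successful deploy to the
    staging environment, so the audit tests the freshly published tags.
    """
    return urllib.request.Request(
        f"{API_BASE}/audits/{audit_id}/runs",
        data=json.dumps({"trigger": "ci"}).encode("utf-8"),
        headers={"Authorization": f"api_key {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_run_request(12345, "SECRET")
print(req.full_url)  # https://api.observepoint.example/v2/audits/12345/runs
```

In Jenkins, a post-deploy stage would execute this script (with `urllib.request.urlopen(req)`) and fail the build if the triggered tests report errors, which is what moves the testing brainwork out of manual execution.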