Judah Phillips, Vizadata - Machine Learning for Marketers: Predicting Tag Compliance

October 23, 2018

Predicting Tag Compliance using Machine Learning

By Judah Phillips of Vizadata

Slide 1:

Thanks Brian and hello everyone. Hope you’ve enjoyed the sessions you’ve been to so far today and thanks for coming to my session. I wanted to talk to you today about the cutting edge. I wanted to talk to you about machine learning for marketers, and I want to talk specifically about predicting tag compliance and other stuff using ObservePoint data.

Slide 2:

But first, I want to talk to you about the challenge with digital analytics. Many of you know me from the world of web analytics and digital analytics, and that world tells you that you should build KPIs and dashboards and visualizations. But there's a whole host of things that occur before, during, and after defining a measurement plan, building your metrics and KPIs, and producing dashboards and visualizations. The politics, right? And the accuracy of the data. And the reality is that in digital analytics, we've got a good handle on a lot of this stuff. Right? Of course many of you have read my friend Avinash's blog, and his comments about the highest paid person's opinion. Now, however you feel about that, we all know accuracy is super important. But descriptive accuracy is only a part of the challenge of digital analytics.


Slide 3:


The real opportunity I think exists beyond the traditional confines of digital analytics. Ya know, for the most part, in my experience over the last 18 or 19 years with digital analytics, it's mostly been descriptive and diagnostic. What you're seeing here in this quadrant diagram is some research from Gartner, probably done late last year and produced earlier this year. It indicates that seven out of ten companies or so are doing descriptive analytics, and another three out of ten are doing diagnostic analytics. This is the KPI, dashboard, and visualization strategy, where you're attempting to identify some insights, tell people about them, hope they listen, hope they take action. Right? Kind of like the Dilbert cartoon I just showed you previously.


But the reality is, in 2018, you know, that's only going to get you so far. Businesses really want to be on the other side. They want to be predicting and prescribing things. But only very few companies are really doing that, despite all the hype we hear about data science and big data and machine learning and AI. There are definitely companies that are exploring it and doing it, but according to Gartner we're looking at maybe, what? One out of every ten companies doing predictive. One out of every one hundred doing prescriptive analytics. Meanwhile, to support whatever types of analysis are being done, ya know, almost 80% of analysts' time is spent finding, preparing, curating, and getting data governed and ready for use in analysis. Data prep. Data wrangling. Right? As opposed to data analysis and interaction. So, I noticed this and I realized that there's a huge challenge here, because digital analytics is mostly on one side but the business wants to be mostly on the other side.

Slide 4:

And so I actually created a company called Vizadata to solve this problem, and we've applied it to a lot of different use cases. One of which is thinking about: what if we could make predictions about tags? What could we predict? Could we predict what pages are most likely to have tagging issues? Could we predict what pages may violate our privacy and compliance initiatives? Could we predict what tags might have excessive load times? Could we predict what tags are most likely to break or go bad? And could we even use the tag data to predict what channels or tags are most important for conversion during the customer journey?


And here’s the thing. I know you can do all of this stuff because I have done all of this stuff. And I will show you how to do some of this stuff today in our presentation.

Slide 5:


And how we’re going to do this stuff is we’re going to use supervised machine learning algorithms. So, we’re going to talk a little bit about what supervised machine learning is versus other types of machine learning and how it fits into AI. But first I want to talk about some model classes.


So, the first thing I want to talk about is classification. So, this is a concept, right? Classification is a class of models, and there are two types in machine learning: binary classification and multi-class classification. I'm sure many of you have heard of these and maybe some of you haven't. What binary classification is, is the identification of the probabilities of an entity to be in one of two classes. So, for example, if I wanted to predict if a lead will convert, the classes are the lead will convert or they won't. Those are the two classes. If I want to predict that a customer is going to churn, the classes are they're going to churn or they're not. If I want to predict if a customer is going to return an item, the two classes are the customer is going to return it or they're not. And if I want to predict whether a tag is going to break, I can predict classes of breaking or not breaking.


So that’s really binary classification in a nutshell. Very simply put for business users, right? It’s the assessment of the probability of a record, could be a customer, could be a tag, could be any entity, to be in two classes. Probability.
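To make binary classification concrete for the hands-on folks, here's a minimal sketch in Python with scikit-learn. The data is entirely made up (two hypothetical features, tenure in months and monthly charge, for the churn example); the point is just that the model returns a probability for each of the two classes.

```python
# Minimal binary classification sketch with made-up churn data.
from sklearn.linear_model import LogisticRegression

# Hypothetical training rows: [tenure_months, monthly_charge]
X_train = [[1, 70], [3, 85], [30, 40], [48, 35], [2, 90], [60, 30]]
y_train = [1, 1, 0, 0, 1, 0]  # the two classes: 1 = churned, 0 = stayed

model = LogisticRegression()
model.fit(X_train, y_train)

# Score a new customer: the model returns a probability for each class.
proba = model.predict_proba([[5, 80]])[0]  # [P(stay), P(churn)]
```

The two numbers in `proba` always sum to one, which is exactly the "probability of a record to be in two classes" idea.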


Now, multi-class classification is another type of supervised machine learning analysis that you can do. We are predicting the probability of an entity or record to be in three or more classes. So, think for example, if I wanted to be able to predict, will a customer have a high lifetime value, a medium lifetime value, or a low lifetime value? Well that’s a multi-class classification problem. Could I use our past data in the relationships in it to learn from, and then can I predict which class that customer is likely to be in in the future? Will they be a high value, a medium value, or a low value customer?


Think of another example. Offers. So let's say I have ten different offers I can send to my customer base, and I know how previous customers have responded to those offers. Well, I could build a model that would predict the probability of a particular customer to respond in a particular way to those ten offers. I would literally get the probability, out of 100%, of the person responding to each of those offers, and the one with the highest probability of a response would be the one that I would send. Right?


Another example with multi-class would be predicting which personalization is most likely to resonate with a particular customer. So if I had five or six different personalization recipes, and I had historic data about the customers and their attributes and how they responded positively or negatively to these personalizations, I could build a model that could then allow me to predict which personalization to provide to a particular customer. That's multi-class.
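For those who want to see multi-class classification in code, here's a minimal sketch with scikit-learn on invented data: two hypothetical customer features and three offer classes. You get one probability per class, and you'd send the offer with the highest probability.

```python
# Minimal multi-class sketch: predict which of three offers a customer
# is most likely to respond to, using made-up data.
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [age, past_purchases]; labels are offer IDs 0..2
X_train = [[25, 1], [30, 2], [45, 8], [50, 10], [23, 0], [48, 9], [35, 4], [33, 5]]
y_train = [0, 0, 2, 2, 0, 2, 1, 1]

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

probs = model.predict_proba([[40, 6]])[0]  # one probability per offer
best_offer = int(probs.argmax())           # send the highest-probability offer
```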


So, classification is something that you can do with supervised machine learning. You can predict the probabilities of an entity like a tag that can either break or not break. Or in a multi-class case, maybe you predict it to have a high, medium, or low page latency. Right? These are some of the model types that are applicable, or model classes I should say, that are applicable to ObservePoint's data.

Slide 6:


Additionally, many of us have heard of regression, right? And regression is your typical y = mx + b, right? We're using a set of independent variables, or explanatory variables, to predict a dependent variable, or response variable. And you have an error term as well. This allows us to predict a continuous variable's next-period value. So, let's take a basic case for regression: I want to predict sales next month, and I can use historic data to do that. Or maybe I want to predict page load time. Right? I could do that with regression. So, again, regression is often considered a linear approach, but it does not necessarily have to be. And there are some supervised machine learning algorithms for doing regression. What we're going to do in this presentation today is explore the first type of model class, binary classification, applied to ObservePoint's data.
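As a quick sketch of that y = mx + b idea, here's an ordinary least-squares regression with scikit-learn on made-up monthly sales history. The feature names and numbers are hypothetical; the point is that the prediction is a continuous next-period value, not a class.

```python
# Minimal regression sketch: predict next month's sales from made-up history.
from sklearn.linear_model import LinearRegression

# Hypothetical history: [ad_spend, emails_sent] -> sales that month
X_train = [[10, 100], [12, 120], [15, 110], [20, 150], [25, 160]]
y_train = [110, 130, 140, 190, 220]

model = LinearRegression()  # the classic y = mx + b, fit by least squares
model.fit(X_train, y_train)

predicted_sales = model.predict([[18, 140]])[0]  # a continuous next-period value
```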


Slide 7:


Okay, so now that we know what classification and regression are, I want to talk a little bit about which supervised machine learning model types you can apply to these model classes.


So, for prediction, there are statistical methods and symbolic methods. What we’re going to go over today are the application of a couple different statistical and a couple different symbolic methods to ObservePoint’s data.


We're certainly going to use a regression model. A regression model is great for estimation tasks that require linear or quadratic relationships to be understood. The problem with regressions is that when there are missing values, outliers, or redundant or harmful data that's just noise and not causative, they might not be the best choice.


We're also going to take a look at neural networks, what are called artificial neural networks. Deep learners, which is exactly the type we're going to look at today, fall into this class of models. An ANN, an artificial neural network, is a very powerful mathematical model suitable for many different types of predictive tasks. There are different types of neural networks; we're going to be looking at multi-layer perceptrons today. These often require numeric attributes, and they can struggle with missing values and outliers. They're great to run your data through to get a different view of that data than, say, a regression model.


We're also going to look at symbolic methods. So we're going to look at decision trees. What decision trees do is essentially construct predictive models by iterations; they have this divide-and-conquer scheme of hierarchical decisions. You've probably heard of CART models and C4.5 models; these are good examples of different decision trees. Distributed random forests, extremely randomized tree models. They're closely related to rule learning methods, but they don't suffer from some of the same disadvantages as rule learning methods.


We're also going to look at some stuff that's not on here. Some optimization methods. So, for example, we're going to look at gradient boosting today, which is a machine learning technique for regression and classification problems. What it does is produce a prediction model from a whole bunch of different weak prediction models, which are typically decision trees. Essentially it builds the model in a stagewise fashion, but it allows for optimization of a loss function.
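A minimal sketch of gradient boosting, using scikit-learn rather than whatever Vizadata runs under the hood, with toy numbers standing in for tag load times and byte lengths:

```python
# Gradient boosting sketch: many weak trees built stage by stage.
from sklearn.ensemble import GradientBoostingClassifier

# Made-up labeled data: [load_time_seconds, byte_length], 1 = "bad" tag
X = [[0.1, 200], [0.2, 250], [2.5, 900], [3.0, 1100],
     [0.15, 220], [2.8, 1000], [0.3, 300], [3.2, 1200]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

# Each stage fits a shallow tree to the gradient of the loss function
# on the previous stages' errors, then adds it to the ensemble.
gbm = GradientBoostingClassifier(n_estimators=50, max_depth=2)
gbm.fit(X, y)
train_accuracy = gbm.score(X, y)
```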

So we're going to take a look at gradient boosting. We're also going to take a look at what we do at Vizadata, which is pretty cool, on top of all these models: we build stacked ensemble models, which are a way to fold together these different model types to eke out an even better model that has higher predictive power.
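Vizadata's stacking is their own implementation, but for illustration, here's a generic stacked ensemble sketch using scikit-learn's StackingClassifier: base learners of different types are folded together, and a meta-learner is trained on their out-of-fold predictions. The data is invented.

```python
# Generic stacked ensemble sketch (not Vizadata's actual stack).
from sklearn.ensemble import (StackingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.linear_model import LogisticRegression

X = [[0.1, 200], [0.2, 250], [2.5, 900], [3.0, 1100],
     [0.15, 220], [2.8, 1000], [0.3, 300], [3.2, 1200],
     [0.25, 260], [2.9, 1050]]
y = [0, 0, 1, 1, 0, 1, 0, 1, 0, 1]

# Base learners of different model types, folded together by a meta-learner.
stack = StackingClassifier(
    estimators=[
        ('rf', RandomForestClassifier(n_estimators=25)),
        ('gbm', GradientBoostingClassifier(n_estimators=25)),
    ],
    final_estimator=LogisticRegression(),
    cv=2,  # out-of-fold predictions feed the meta-learner
)
stack.fit(X, y)
acc = stack.score(X, y)
```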


So, for those data scientists out there, I hope that made some sense, and for those of you who aren’t data scientists maybe that was some heightened diction.


We're going to dive into these in the context of a business user, so let's get back down to earth now.


Slide 8:


In order to use supervised machine learning, any of those model types, whether we're talking about a GBM or a distributed random forest or a logistic regression or a neural network, you need training data. This is key. This is machine learning 101 for supervised machine learning. Supervised machine learning requires data that shows historic relationships between what you want to predict and both positive and negative examples. So a training file of this historic data is used for learning. Typically, and I'll show you some examples of this in the next slide, you'll have each row as an individual record. You'll have columns that are called features in the data; they're attributes of the record. You'll have a column that contains your dependent variable. You'll have a whole bunch of columns that contain your independent variables. The idea being that you can learn how to predict and build a model from these historic relationships.
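That row/column layout looks like this in code: a hypothetical five-row training file in pandas, with feature columns (the independent variables) and one label column (the dependent variable). The column names and values are invented.

```python
# Sketch of a training file: rows are records, columns are features,
# one column holds the dependent variable we want to learn to predict.
import pandas as pd

df = pd.DataFrame({
    'tenure_months':  [1, 30, 48, 2, 60],
    'monthly_charge': [70, 40, 35, 90, 30],
    'churned':        [1, 0, 0, 1, 0],  # label: positive and negative examples
})

X = df.drop(columns='churned')  # independent variables (features)
y = df['churned']               # dependent variable (what we learn to predict)
```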


What I've been doing in my software is examining the data relationships in this file, like any supervised machine learning algorithm would do when building models.


Slide 9:


So here's an example. This is important. An example of training data, because you've got data at ObservePoint for training right now. This is training data for binary classification. So take a look at this, I know it's a little small on the screen. What you're going to see is a customer ID. So, each row is a customer record with attributes of every customer. We have gender, whether they're a senior citizen, whether they have a partner, whether they have dependents. How long have they been a customer? Do they have phone service? Do they have multiple lines? Do they have internet service? Do they have online security? You've probably guessed it, this is a telecom data set. Do they have other products or service features like online backup? Device protection? What type of contract are they on? How do they pay? What do they pay with? What are their total charges over their lifetime? What are their monthly charges?


Keep in mind, this is training data. So let's say this was customer data from 2017, 2016, and 2015. Well, you know what we would know about those customers? We would know whether they churned or not. So our dependent variable is highlighted here in green. It's a zero or a one. Not unlike a logistic regression, we're using it as a label where zero indicates the customer didn't churn and one indicates the customer churned in the past. We're going to use this historic data to build a predictive model, and then we're going to evaluate the results and look at the predictions on ObservePoint data soon.
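One practical note for anyone building this themselves: telecom columns like contract type and payment method are categorical, and most learners want numbers. A common sketch, assuming pandas and invented values, is one-hot encoding:

```python
# One-hot encode categorical training columns so a model can consume them.
import pandas as pd

# Hypothetical slice of a telecom training file with categorical attributes.
df = pd.DataFrame({
    'contract':   ['month-to-month', 'two-year', 'one-year', 'month-to-month'],
    'pay_method': ['e-check', 'card', 'card', 'e-check'],
    'churn':      [1, 0, 0, 1],
})

# Expand each category into its own 0/1 column; the label stays as-is.
encoded = pd.get_dummies(df, columns=['contract', 'pay_method'])
```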


Slide 10:


I also want to show you what training data for multiclass classification looks like. Here's an example. Again, this isn't ObservePoint data, but it's illustrative. We do a lot with customer data at Vizadata, so again, this is a customer record. We have a gender, we have an income, a location, a marital status, all sorts of information about how many months they have been participating in particular programs, recency, frequency, things like that. What type of member they are, options, coupon values, how much they spend, where they live, total lifetime value, whether they last responded to an email, and then what membership type they are. Are they basic, extended, or premium? And what we want to use this data set for is to predict which membership type to offer our new customers. So we can learn from this data to build a model so that when we have new customer data, we can say: what's the probability of that customer to respond as a basic member? As an extended member? As a premium member? So again, this is multiclass classification.


Slide 11:


And then finally I wanted to show you guys a training data sample for regression. And remember, training data is necessary for any supervised machine learning algorithm. AI, that's what I'm talking about, uses training data. Ever see the autonomous vehicles driving around in constrained areas? They're crashing into things; they're taught how not to crash. They're taught what to recognize. Here's the deal. Even for more business-focused use cases, you need training data. And again, this is a training file for regression. Again, it's historic data. This just happens to be Facebook data about a particular Facebook page. We have all of this information about the posts. I have historic data about the interactions each post had on this page, and that's my dependent variable. So I want to build the model that's going to predict how many interactions we're going to have on a particular post on Facebook in the future.


So again, regression is predicting the next period's value for a continuous variable. Very powerful for a lot of marketing use cases. Load time, right?


Slide 12:


Here's an example training file for regression. Now, here's the reason I told you guys all of this. How I oriented you about the challenge with digital analytics. How I walked you through: What is binary classification? What is multiclass classification? What is regression? What are the model types that you could apply to do these supervised machine learning analyses? And what's a training file? Which is what you're going to need to get started, assuming you have the data science skills to build these things, and we'll get to that in a second.


Now, assuming all of that, right? I know, a lot of assumptions. Here's a sample of data that comes out of ObservePoint. I literally went into my ObservePoint implementation, and I exported a bunch of files, and look what I got. I've got such great stuff. I've got status codes and tag names and versions and whether things are duplicated or when they start and stop. I've got different tag names, different tag values that I can pass; I get errors, I get byte length, I get load time. So much of this stuff to build predictive models around. It's pretty awesome. And you know what? This is going to look different for you. Because your ObservePoint is going to be different. You're going to have different eVars and s.props and UTM this and UTM that. You know, MC underscore whatever all over the place. But you can use the data of ObservePoint to build similar analyses using supervised machine learning, which is what we're going to do right now.


So, let’s bring it all together guys in the last half of this presentation.


Slide 13:


First, I'm going to use all of this training data to build the best possible model using Vizadata Seer. This is a product that I've created over the last couple of years to do this type of analysis without requiring a data scientist. You upload all of your training data, upload the data you want to score on, set how long you want the model to run, and click next, and you get the results I'm about to show you here.


Slide 14:


Example number 1. Let's predict which pages on your site will have compliance issues in the future, staying true to my title.


Slide 15:


So, what's compliance, first of all? Maybe this is new to some folks. Compliance is when you follow the rules, and there are some big rules to follow these days. And the consequences of following or not following these rules are huge.


So you've probably heard of GDPR, right? The EU privacy regulation. It defines PII and sensitive information. It requires opting in for collection. It has an extraterritorial effect. If you want to do business with the EU, you'd better be conforming to these standards or your company will face very significant fines. For all of you in the EU listening to this, today you probably heard about how Google just had to pay a 5 billion dollar fine, and as a result, you may end up paying up to $40 for your Google apps going forward, and possibly more. So, you know, not complying has a consumer and a business impact.


In America, what's coming down the road, everybody? CCPA. The California privacy act, which is different but similar in tone to the GDPR, where you've got definitions of PII and other concepts related to your browser, purchase, site, and app behavior. There's an extraterritorial effect: if you do business in California, you've got to follow this. It allows for an opt-out instead of an opt-in, but you've got potential fines in the millions if you don't conform.


So there are these two major initiatives, globally and nationally, driving compliance. But there's also a whole bunch of other things that go on in business, right? There's fraud. There's data security. There's governance initiatives. There's SLA requirements. There's internal benchmarks for things. And there's even page speed concerns. So all of these target states can be dependent variables in a supervised machine learning model.


Slide 16:


Maybe you want to use your ObservePoint data in Vizadata to do something like this. Now check this out. This is literally an export out of ObservePoint. I've had to reduce the number of columns here just to fit it on the page, but what you're looking at is URLs, JavaScript, status code, byte length, load time, position, bytes, tag start, tag stop, tag load, tag resume, and then the different variable names passed within your SDR or your tagging plan. And then what we've labeled. We've labeled this as compliant or not. Has this page had a compliance issue in the past or not? You can see those that are zeros and those that are ones.


So, I took this training file, and I ran it through Vizadata Seer, which essentially automates the supervised machine learning process to produce results.


Slide 17:


Here is the model that came back. Now, when I saw that we had a .999 AUC, which is nearly a perfect model, I almost started to think to myself, well, maybe this isn't correct. How could the model be so good? Then I actually went back to some of my engineers and some of the folks I work with and said, look at this, and you know what? We determined that this is absolutely correct. What it is, is that the ObservePoint data is very well suited to a gradient boosting machine. We actually have a model that is 100% more accurate than just randomly guessing which pages are going to be compliant or not.


So now we've built a model that we know is extremely accurate, and you can see other summary statistics. The area under the curve, as I mentioned, would normally be .5 if it was random chance, so we're about twice that, and that's pretty good. And you can see some other metrics like the root mean squared error, the mean squared error, the log loss, the mean per-class error. And what we do is, once we load the training data, we automatically build many different instances of this data, doing cross validation on all the models we build, then surface the best possible model of the various types of models. You can see the gradient boosting machine was the best of the different models we built. We use that for your predictions. This took me five minutes to build once I exported the data from ObservePoint. Pretty powerful.


Slide 18:

Now, one of the things the model producing process for a supervised machine learning algorithm produces is something called feature importance, or variable importance. I think this is really cool when it comes to the model elaboration and understanding process. Essentially what variable importance is, is how important is that column of data to what you're trying to predict?


So, if we just go back to the training data, variable importance is telling me which of these columns are most important, relative to each other, for predicting compliance. Is tag v2 more important than tag load time? Is the byte length more important than tag bw? How would a human know? How could a human figure this out? Well, they can't. This is why these algorithms are so powerful.


What we're seeing is that the biggest driver of the compliance issue in this real data is tag c36. It's also tag g, and apparently tag load time has an issue too. And some other tags. Some c tags there. So, literally within five minutes after I uploaded my data from ObservePoint to Vizadata, I built the best possible model for predicting tag compliance. It was 99% accurate, and then I used that model, the gradient boosting feature importances, to understand which tags are most predictive of tag compliance issues. But what would I do with this? I would score my pages and all their components going forward, right? To predict which ones are most likely to have tag compliance issues. Then I would look at these features and understand which tags are most likely to be violators. I would be all over tag c36 and tag g. So, an interesting way to start to explore your ObservePoint data in new ways.
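Here's how you'd pull that variable importance out of a gradient boosting model yourself, sketched with scikit-learn on made-up data where column 0 drives the label and column 1 is pure noise:

```python
# Variable importance sketch: which column matters for the prediction?
from sklearn.ensemble import GradientBoostingClassifier

# Made-up data: column 0 perfectly predicts the label, column 1 is noise.
X = [[0, 5], [1, 3], [0, 4], [1, 6], [0, 2], [1, 5], [0, 6], [1, 2]]
y = [0, 1, 0, 1, 0, 1, 0, 1]

gbm = GradientBoostingClassifier(n_estimators=30, max_depth=1)
gbm.fit(X, y)

# Relative importance of each column, normalized to sum to 1.
importances = gbm.feature_importances_
```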


Slide 19:


Now, once I have new tag data, maybe what I want to do next is predict the likelihood of those pages having tag issues in the future. You can see that what a binary classifier produces for each URL, in this case, is a crisp classification label. That's the zero or the one. A one is saying this page is going to have a problem; zero is saying this page isn't. Then you've got a probability of the problem or not for each page. So, in the case of row 5, whatever that URL is, I've masked it, it's 99% likely not to have a problem. Whereas in line 0, for that URL, it's 87.4% likely not to have a problem.
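Scoring new records like that, getting both the crisp 0/1 label and the per-class probability, is a pair of calls in scikit-learn. A sketch with a single invented feature, something load-time-like:

```python
# Scoring sketch: crisp label plus probability for each new record.
from sklearn.linear_model import LogisticRegression

X_train = [[0.1], [0.2], [2.5], [3.0], [0.15], [2.8]]
y_train = [0, 0, 1, 1, 0, 1]
model = LogisticRegression().fit(X_train, y_train)

new_pages = [[0.12], [2.9]]              # hypothetical new URLs' features
labels = model.predict(new_pages)        # the crisp 0/1 classification label
probas = model.predict_proba(new_pages)  # probability of each class, per page
```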


A great way to begin to get prediction into your tag compliance plans and then also to be able to be prescriptive about which tags and which pages you approach. Pretty cool, huh?


Slide 20:


And now let's do example number 2. This is a more technical one. I remember when I used to run analytics at Monster Worldwide, my team would get dinged all the time because we would put the tags on the page and sometimes the tags would go down. This predates any auditing; it's how I ran into Rob from ObservePoint back in those days. Oftentimes, load times were problematic, especially in the days before async tags, when the rest of the experience could be blocked by slow tags. So, let's think about how we can use supervised machine learning in Vizadata and ObservePoint data to predict which tags will have excessive load times.


Slide 21:


Now, I just picked load time. I'm talking about bad tags, however you define bad. If you want a long load time to be your dependent variable for your analysis, great. If you want to use invalid tags, there's an invalid tag indicator in ObservePoint. Maybe you want to split your distribution on page latency. Maybe it's the page or the tag size. Maybe it's how slow the tag is to start. Maybe it's how slow it is to end. Maybe it's the values in the tag. Maybe it's the location of the tag on the page, or geographically.


Now, however you want to define "bad", you can, when you build your training data. It's a label that you either put in or derive as a feature from the data.
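Deriving that label from the data is often a one-liner. For example, a sketch in pandas that calls anything above the 75th percentile of tag load time "bad" (the millisecond values here are invented):

```python
# Derive a "bad tag" label by thresholding the load time distribution.
import pandas as pd

# Hypothetical tag load times in ms from an ObservePoint-style export.
df = pd.DataFrame({'tag_load_ms': [120, 95, 2400, 150, 3100, 80, 110, 2800]})

# Label anything above the 75th percentile as bad (1), everything else 0.
threshold = df['tag_load_ms'].quantile(0.75)
df['bad_tag'] = (df['tag_load_ms'] > threshold).astype(int)
```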


Slide 22:


So, in the context of ObservePoint's data, here's another exported ObservePoint data set. This time I'm taking a look at the tags on the site, their tag IDs, how many JavaScript errors they have, what tag account they're associated with, what tag version they're associated with, tag position, multiples, whether the tag is valid, the bytes, when they started, when they stopped. And I want to show you what I did here. I looked at tag load time, and I labeled which ones had a long load time. I used that as the zero and the one.


Slide 23:


What I did is I ran it through Vizadata. This model I let run a little bit longer, I think ten or fifteen minutes, so that I could ensure our deep learner would kick in. Notice that this data was less amenable to a general linear model. You see the deep learner kick in there. And again, we've got some pretty stellar performance from ObservePoint's data, leading me to believe that the ObservePoint data model is great for data science and is highly predictive for these types of classification challenges.


So, again, the gradient boosting machine had the highest AUC, the same AUC as the stacked ensemble model that we built from all the other models, but its RMSE was a little bit lower, as were the log loss and the MSE, so we used that model to predict.


Slide 24:


Before we predict, we want to see which variables were most important in predicting long load times. Now, you can do that with the model used to predict, or you can look at the features of the other models that you built. For the model we used to predict, the variable importance is telling us that when the tag started or stopped, the response time, and the position are very important to load time. Then the deep learner, actually down here, I'd like to direct your attention, is a little bit more useful in the sense that it actually gives us the variable values.


What this is telling me is that this Akamai mPulse Pageload tag is one of the problematic tags that's causing long load times. The same thing with Adobe DTM Bootstrap. What I'm also noticing is that long load times are occurring when there isn't a value in ObservePoint for these fields in this data. So when there isn't a value for tag response, when there isn't a value for tag stopped, it leads me to believe that on this site there are some tags erroneously firing and not being recorded or collected correctly. That is likely causing some long page load times.


Again, I get a model that I can use for prediction. We haven’t even predicted anything yet, but out of that model I get some great insights about which features are driving performance or in this case lack of performance.


Slide 25:


Then what I've done here is I've taken new tag data, and I've applied that model to score the likelihood of these tags having long, heavy load times. The majority of the tags appear not to be problematic, but check this one out at the bottom. This BlueKai tag. Maybe that's something that I want to investigate.


So, again, very quickly, with the powerful partnership between Vizadata and ObservePoint, we can build a model using supervised machine learning from ObservePoint's data and classify predictions in the context of your business goals, for example privacy compliance and tag load time or other performance measures. And we can give you not only what's causing the performance issues, but also the probabilities of those things occurring, on either a tag or a page basis, going forward into the future. Super cool.


Slide 26:


Now, the last example I have for you guys today is near and dear to my heart. It's this idea of attribution. I've been doing attribution for a very long time, and I think we've come a long way with attribution. I think supervised machine learning in attribution is really a sea change.


What I want to do with some ObservePoint data now is do attribution conversion prediction using the tags seen in the customer journey. Pretty cool stuff, let’s do it.


Slide 27:


Again, for those of you who may not have heard of attribution before, it's a solid use case for classification. This is where we have a journey where a prospect or a customer or a lead is touched by different marketing campaigns along the path to purchase, and then they do something with value. They buy something. They convert. Then the question is, of course, how do we allocate credit to these different channels? That, to me, is great: credit at the channel level. But what if you credited at the tag level within the channel? Within the campaign? That's super great, and that's what we're going to show you now.


So, how do we do attribution with supervised machine learning?


Slide 28:


So, this is essentially ObservePoint data. This is customer data, and this is tag data. Notice how these are tags, right? I put in more than just marketing campaign tags here, just to illustrate a point for everybody. In theory, with the customer journey, you wouldn't have things like Adobe Marketing Cloud or exposure to non-advertising tags.


But here you see, this is like a customer journey. So for this account, they didn't get exposed, false, to Adobe Ad Cloud, Adobe Marketo, AdRoll, Advertising.com, AOL, or AppNexus. They did get exposed to comScore, they did get exposed to DoubleClick, they did get exposed to Facebook. And yeah, they did get exposed to Google Remarketing. They didn't get exposed to YouTube. So you can see, these are the touches. The tag touches in the customer journey. And then the final thing we have is whether or not this user, this lead, this person, this customer, this ID, who had these touches, converted, so we just label it. We know this. We know this from our CDP, we know this from our CRM. We know this if you work with folks like me who put together solid data strategies. We know this stuff.


So, now how do we apply it? We take this as training data. Maybe this is all the customer journeys from 2016 and 2017. Now we have all-new customer journeys, maybe even with some new tags; no problem, we can include those. We run the model.
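Mechanically, "running the model" on those labeled journeys is just fitting a classifier. A hedged sketch using scikit-learn's logistic regression on synthetic data (the journey counts, tag flags, and conversion rule here are invented; Vizadata automates this step):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for 2016-2017 journeys: 500 rows x 6 tag flags.
X_train = rng.integers(0, 2, size=(500, 6))
# Toy ground truth: touching tag 0 or tag 3 makes conversion more likely.
y_train = ((X_train[:, 0] + X_train[:, 3] + rng.random(500)) > 1.2).astype(int)

# Fit a generalized linear model (logistic regression) as the classifier.
model = LogisticRegression().fit(X_train, y_train)
print("training accuracy:", round(model.score(X_train, y_train), 2))
```

The fitted model can then be handed any new journey rows, including ones with tags re-encoded into the same columns.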


Slide 29:


Here’s the model. Now, journey attribution is a little harder than working with a really superb data model like ObservePoint has for your ad data. But again, this is ObservePoint data with a little extra sauce mixed in from customer and marketing data. Check this out. I ran it through Vizadata again; it took about five minutes. I bet it would have gotten a lot more accurate if I’d let it run longer, but I think the model converges pretty well: 74% AUC, so it’s roughly 50% more predictive in assessing the probability of a customer to convert on one of the journeys they’re on than random guessing alone. If you have attribution models right now, I’d encourage you to ask yourself: “How accurate is my attribution model?” “Are we using supervised machine learning to train it, or are we building a model with a lot of rules that change over time?”
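For intuition on that 74% figure: AUC is the probability the model scores a randomly chosen converter above a randomly chosen non-converter, so 0.5 is random guessing and 1.0 is perfect. A tiny worked example (the labels and scores below are made up, not from the actual model):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical conversion labels and model scores for six journeys.
y_true = np.array([1, 0, 1, 1, 0, 0])
scores = np.array([0.9, 0.4, 0.7, 0.3, 0.2, 0.6])

# Of the 9 converter/non-converter pairs, the converter outscores the
# non-converter in 7, so AUC = 7/9, about 0.78.
auc = roc_auc_score(y_true, scores)
print(round(auc, 2))
```

An AUC of 0.74 therefore sits roughly halfway between random (0.5) and perfect (1.0) discrimination, which is the sense in which it is about 50% better than guessing.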


You can see the deep learner in this case didn’t perform as well as the generalized linear model, nor did some of the symbolic methods. The statistical methods seemed to perform the best.


Slide 30:


Now, check this out. Again, I’m looking at the deep learner’s algorithmic attribution here, and you can assess the variables of importance down to the tag level. What I see is that the presence of BlueKai being true, the presence of the Smart App Banner, the presence of Moat, the presence of Resolution, the presence of AddThis, and the presence of Facebook are all positive indicators of credit toward conversion. These are really the weights in your tag-based algorithmic attribution model. This is where it gets fun: assessing these across the customer journeys. I can start to see which tags, which networks, and which technologies are most important for conversion in the customer journey.
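One common way to get tag-level “weights” like these is a model’s feature importances. A sketch with scikit-learn’s gradient boosting on synthetic data (the tag names match a few from the slide, but the journeys and the conversion rule are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
tags = ["bluekai", "smart_app_banner", "moat", "addthis", "facebook"]

# Synthetic journeys where, by construction, conversion requires
# exposure to both the bluekai and facebook tags.
X = rng.integers(0, 2, size=(400, 5))
y = ((X[:, 0] + X[:, 4] + rng.random(400) * 0.5) >= 1.5).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Rank tags by how much the fitted model relies on each one.
ranking = sorted(zip(tags, model.feature_importances_),
                 key=lambda pair: -pair[1])
for tag, importance in ranking:
    print(f"{tag:18s} {importance:.3f}")
```

The importances sum to 1, so the ranking reads directly as a share of predictive credit per tag, which is the tag-level view described above.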


Notice, as I said, I added a couple of things you wouldn’t see in a normal customer journey, like Adobe DTM and Google Tags and stuff like that, but I wanted to illustrate with those just to show you how, with a tool like Vizadata and a partnership with ObservePoint, we can pretty much include anything.


Slide 31:


And now what I would do is take this model I’ve created and apply it to my customer journeys in 2018. Remember, my training data came from the past, say the 2016 and 2017 journeys. So I apply it to my current journeys, and I get predictions on the probability of each customer to convert on their current journey, so I can then treat the customers who are more likely to convert better.
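Scoring the current journeys is a single prediction call on the trained model. A hedged sketch (the data and conversion rule are invented, and the 0.5 cutoff is just an illustrative threshold for “likely to convert”):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Fit on "past" journeys (a stand-in for the 2016-2017 training data).
X_past = rng.integers(0, 2, size=(300, 4))
y_past = X_past[:, 1] | X_past[:, 2]  # toy conversion rule
model = LogisticRegression().fit(X_past, y_past)

# Score "current" journeys and flag likely converters for better treatment,
# e.g. routing them into a priority campaign segment.
X_now = rng.integers(0, 2, size=(8, 4))
p_convert = model.predict_proba(X_now)[:, 1]
likely = p_convert >= 0.5
print(f"{likely.sum()} of {len(p_convert)} journeys flagged as likely converters")
```

Those per-journey probabilities are what let you rank and treat in-flight customers differently, rather than only explaining last year’s conversions.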


Slide 32:


So, you can do these same types of analyses on your ObservePoint data with Vizadata to:

  1. Learn from your past.
  2. Model what happened.
  3. Understand what’s contributing to the prediction and why.
  4. Predict your future.


Use cases with ObservePoint: compliance, tag issues, attribution, and more.


And then there are other marketing use cases that supervised machine learning can be applied to, too: lead conversion, churn, campaign optimization, ad performance, customer response, sales response, recommendations, forecasting.


So, I hope you enjoyed this overview of how you can think about ObservePoint’s data in the context of machine learning, and how you can apply different model classes like classification, whether binary or multiclass, and regression, across different model types like gradient boosting machines, deep learners, symbolic methods like decision trees, and advanced methods like stacked ensembles. With a tool like Vizadata, you can very quickly create a model that you can then apply for prediction, get some actions out of it, and start operating with prediction and prescription versus just description and diagnosis going forward.


Slide 33:


So, that’s all I have for you today. I hope you enjoyed it. For more information, and if you want to learn more about what you saw today and how to do this quickly and easily on your ObservePoint data without a lot of data scientists, feel free to reach out to me. I’m Judah Phillips, CTO and Co-Founder, judah@vizadata.com.


Thanks everybody, and I hope you have a wonderful rest of your summit and enjoy your next session.

