Daryl Acumen, Hewlett Packard Enterprise - Keeping Your Analytics Clean and Consistent Across Business Units

November 22, 2016

Slide 1:

It’s been a great afternoon and a great morning. We’ve had some great speakers so I’m excited to be here and share with you some of our experiences from Hewlett Packard Enterprise.

First of all, let me give you a little bit of background about Hewlett Packard Enterprise. As some of you may know, about a year ago, one of the 10 largest publicly traded corporations in the world, Hewlett Packard, split into two. It was a major event. HP has been around for just ages and we’re one of the reasons Silicon Valley is where it is. But we decided that our strategy would be better executed if we were to focus, so we split into two different companies. The first, which most people know about, is HP Inc. We’re number one in PCs, number one in ink-jet printers, number two in consumer PCs, number one in LaserJet, number one in graphics. It’s a tremendous company. The other one is, as I like to say, all the things people don’t know that we do. Again, a Fortune 50 company just like HP Inc., but we’re number one in servers, number two in services, number two in networking, number four in storage, number one and two in several software categories. So they’re both Fortune 50 companies. Hewlett Packard Enterprise, as far as people are concerned, is the larger of the two with 220 thousand employees.

Slide 2:

Just to back up a little, to the actual evolution of analytics and analytics tagging at Hewlett Packard, we’ve got to start with the original deployment of Omniture SiteCatalyst. HP signed with Omniture back in 2002, not long after the name change. Initially it was a business-unit-driven deployment, and it was sort of like the wild west. Individual business units that wanted the tags would engage directly with Omniture and come up with their own standards. There was some guidance from within Omniture, from our implementation consultants, but basically it was the wild west. People would do whatever they wanted.

About 2005 we realized that that just really wasn’t scalable, it just wasn’t going to work. So we came up with the idea to unify all of our tags and to have it managed by a globalized, central analytics team. We came up with a central JavaScript, a bootstrap file, and we developed our own internal home-grown tag management system, and that worked for a really long time. But over time, things started to get fragmented again. Individual business units and teams would ask for their own specific implementations of the tags, so you would have splinter groups and fragmentation and it became a mess.

Around 2010, we came up with an initiative called Clean Sheet. This was our chance to start over fresh—to wipe the slate clean, to sort of shake up the box, reset our tags, our variables, everything—and start over. We even added a new website design, it was a beautiful new, white design. We had a goal of getting to 100 percent deployment in Clean Sheet within a year or two. We got to 80 percent, but then around 2013, things changed again.

At this point we realized that the home-grown JavaScript bootstrap file probably wasn’t the most scalable solution either, so at 80 percent deployment of Clean Sheet globally, we decided to go with an actual third-party tag management system called Ensighten. We were starting to deploy this globally, and first we said we wanted to hit all major continents and regions. Then we said, no let’s just do North America and Asia Pacific. Then let’s just do The United States and China. Then we decided to drop China, and you get the picture.

Around 2015, when we were moving forward with the US deployment, we had another reset. Meg Whitman, our CEO, decided that it was time for us to achieve greater focus—to split the company in half.

Slide 3:

So we never really got it right, and this is the horror state that resulted. From a tag management perspective, we had multiple solutions. We had Ensighten, of course, in the United States, once China got put on the backburner. We had our enterprise business group, which was about to become the new HPE company, and which decided to go with another solution called Tealium. But then we had the majority of our sites, pages, and regions still using the legacy JavaScript. Then of course you have that random, rogue guy out in Japan who decided that he wanted to do an Adobe DTM tag management pilot. We’ve got four different regions, EMEA, Latin America, North America, and Asia Pacific, each with its own unique strategies and priorities.

From an actual tagging perspective, we have Clean Sheet, we’ve got our legacy JavaScript, we have the Atlas Standard—which was basically Clean Sheet modified to work on our store pages—and then we’ve got Clean Sheet Wash for those regions and business units that wanted to move to Clean Sheet, but didn’t have the skills or resources to make the change, so we gave them a way to sort of back into that.

We have multiple business units: Imaging and Printing Group, Personal Systems Group, Enterprise Business Group, which was going to become HPE. We had the HHO store, which was an interesting group because the store has a tendency to think it does 100 percent of the revenue of the company, and never quite internalized the fact that Hewlett Packard actually does the majority of its sales in the channel: people who buy their laptops at Walmart and Best Buy. We had E-Prime, which was an enterprise version of the store. We had the B2B store, where both small and large businesses could come in and refill their ink orders, etc. Then we had the Latin America storefront, a group that didn’t have the resources to deploy Clean Sheet or our Atlas Standard, so they went out to a third party to do their implementation and had a store that, frankly, wasn’t even tagged with Omniture.

Speaking of Omniture: on the analytics tool side, we had SiteCatalyst, which we like to think was deployed globally, but EMEA decided that they wanted more control over their tags, so they went out and signed an agreement with Google Analytics. Now we’ve got an entire region using Google Analytics, and Latin America was considering it as well. But then we had other teams inside using Piwik.

So we have three different analytics solutions, four different tag management solutions, at least four different tag standards, four regions, multiple business units—it was just a mess.

Slide 4:

Out of this, we decided that in order to make sense of all this, the first thing we had to do was an inventory, an audit if you will, of where our JavaScript was globally. So we engaged with a company called ObservePoint. Our team is in charge of governance standards within the company; we managed and funded this technology and engaged with our business units, sort of as an internal consultant, to find out the state of tag deployment so we could know what our next steps were. These are some of the initial findings that we had. We found out that six percent of URLs were missing Adobe Analytics in the first place. Seven percent of URLs were going to broken pages. Three percent of the URLs had JS errors. Multiple important property variables were missing. This was all pretty much what we would have expected, maybe actually a little bit better than we were expecting, but it gave us a road map and helped us understand what our next steps were.

Slide 5:

This is a snapshot of a different section, the enterprise business section, soon to become the HPE section of the site. Twelve percent of URLs were redirecting to a Page Not Found. Ten percent of the URLs had JS errors. This is just to give you a sense of the kind of information that was guiding us. Then we started to bump into some shocking, fun insights that we could take immediate action on.

Slide 6:

This is the first example. We were going through the storage section of our site and we realized that 52 different pages, across more than 8,000 instances, were populating eVar 54. A little bit of backstory: eVar 54 is actually our paid search tracking eVar. The purpose of eVar 54 is to take a tracking code that’s passed from a paid search link, dropped by our paid search agency, and populate it into a variable that’s passed over to our paid search tool of record. At one point I think it was Kenshoo, and I think we’re using DoubleClick now. Seeing this populated on a page is a problem, because the audit was not going out and clicking on paid search links, so we decided to dig into this. We looked into the particular pages in question, and this was a huge red flag: what we discovered was that these eVar 54 tracking codes were actually being populated inside of href links on the storage homepage.

Turns out, what had happened was this: when we were going through the redesign of our Hewlett Packard Enterprise site, right before we launched the new site, some of the designers from our design agency, and I’m not going to name names, didn’t have a full site map and didn’t know where some of the product pages were. So they did what most of us do: they went out and googled it. And if you’re googling what the authorized page for a product should be, the most reliable link is usually a paid search link. They went ahead and clicked on the first link they saw, the paid search link for this particular product, or actually a couple of products, grabbed the URL, embedded it into the storage homepage, and voila! Now you’ve got this tracking code embedded inside of our homepage.

It actually wreaked havoc because now we’ve got this paid search agency that’s jumping out of their skin and really excited, saying, “Wow, this is great! This particular paid search campaign is driving all kinds of click-throughs and revenue. It’s doing wonderfully!” But it’s because the tracking code for that paid search campaign is embedded in the homepage. So finding this was a huge, huge win for us, because this could have been bad. We could have gone on spending tens of thousands of dollars, hundreds of thousands of dollars, funding a paid search campaign that, frankly, might not have been performing anywhere near as well as it appeared.
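An audit check for this failure mode can be sketched in a few lines: scan a page's anchor tags and flag any href that carries a paid-search tracking parameter, since campaign codes should arrive on the query string of a landing request, not live inside the page's own links. The parameter name `ps_track` here is hypothetical; the talk doesn't name HP's actual tracking parameter, so treat this as an illustration of the technique, not the real implementation.

```python
# Sketch: flag internal links that embed a paid-search tracking code.
# "ps_track" is a hypothetical parameter name standing in for the real
# campaign tracking parameter that fed eVar 54.
from html.parser import HTMLParser
from urllib.parse import urlparse, parse_qs

TRACKING_PARAM = "ps_track"  # hypothetical paid-search tracking parameter

class TrackingLinkFinder(HTMLParser):
    """Collect hrefs whose query string carries the tracking parameter."""

    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href") or ""
        query = parse_qs(urlparse(href).query)
        if TRACKING_PARAM in query:
            self.flagged.append(href)

page = ('<a href="/storage/array?ps_track=CAMPAIGN123">3PAR</a>'
        '<a href="/storage/overview">Overview</a>')
finder = TrackingLinkFinder()
finder.feed(page)
print(finder.flagged)  # only the first link carries an embedded tracking code
```

Run against a crawl of the storage section, a check like this would have surfaced the agency-embedded campaign URLs long before the paid search numbers started looking too good to be true.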

Slide 7:

The next thing that we discovered through our auditing was some issues with our internal search. Around the time of the separation, which was November 1, 2015, we discovered that our internal search conversions were dropping off. A conversion for internal search is basically a click-through; that’s what we consider an internal search conversion. The click-throughs from internal search were dropping off, and we had to figure out why. We ran this audit, and what we figured out was that the internal search tracking codes were not being populated on our support pages. That’s a really bad thing.

Let me back up a little bit. For our internal search, you look for your particular keyword, and when you click a link, we pass a tracking code that identifies the type of link you clicked on. We’ve got regular search, we’ve got one-click, and we’ve got HP Recommends, which are our versions of internal paid search, and that helps us identify and understand which links are working, etc.

It turns out that around the time of separation, our support page team decided to do a redesign, and they didn’t have funding to do a full Clean Sheet implementation, so they did an abbreviated implementation and decided to economize. One of the things they decided to save money on was tracking-code linking. In their view, “Nobody drives traffic to our pages with tracking codes, so we’re just not going to use them. We’re just going to omit that from the JavaScript.” Well, there’s one thing that does use tracking codes, and that’s internal search. What basically happened was, we were getting no visibility into the clicks that were going into support from internal search.

Think about it: this is Hewlett Packard, for crying out loud. How many people search for drivers and manuals on our site? This is about half of our internal search volume. We lost visibility into the click-through rates on 50 percent of our internal search volume, which was a really big deal. We wouldn’t have found that if we hadn’t implemented an actual tag auditing solution to help us take an inventory.

Slide 8:

The next one, and this is actually a really big find, was natural search. Right about the time we launched our new Hewlett Packard Enterprise site, we discovered that natural search traffic was dropping off, and it was alarming. Our organic search tool was telling us the drop-off was as much as 50 percent. And to make matters worse, now our C-level executives, our CMO and our CEO, Meg, were starting to ask, “We’ve got this new site, how is it performing from a natural search perspective?” And we’ve got to tell them it’s dropping by 50 percent. That’s ridiculous. At this time, we had 267 thousand employees. Orlando, Florida has about 270 thousand residents. Just our internal employee traffic alone should have carried the site, but it wasn’t, and we were freaking out. So we did a targeted audit to figure out what was going on.

Here’s what the audit found. For a little bit of background, we use an external organic search tool called BrightEdge. We launched it right about the time that SiteCatalyst v15 came out, and the way you want to integrate is one report suite per country, so that you get visibility in the tool into organic search for every single country. Well, we didn’t have a report suite for every country, so we innovated and decided to use the new v15 segments. We created a segment for each country and an integration for each segment. It was fantastic. It worked very, very well. We implemented within a week, high fives all around, and everybody was happy.

The segments were built on top of prop 7 and prop 1, which gave us country and language in a particular format. Here’s sort of what happened: usually we would have “cs:” and then the country code, so Clean Sheet, colon, and then “us,” and the segment would look at the last few characters of the prop to determine what the country was. Grab the last two, you know it’s “us,” the United States goes into the “us” segment, and boom, we’ve got SEO performance for the United States. With the separation and the new design, we decided once again to economize: why do we need two different props for country and language? We decided to concatenate them together into eVar 1 and just make it country, colon, language.

The fallout from this is: now the last few characters of these variables are no longer telling you the country, they’re telling you the language, which means the segments are failing, which means the traffic for these countries is not making it over to our SEO tool. For some countries it worked out: if the country code and the language code were effectively the same, like Japan for example, Japan and Japanese, then it would work. But for the US, where the country code is “us” but the language code is “en” for English, it broke. So our organic search tool was telling us that our traffic was going down by 50 percent, which was causing us to panic, and this is something we really didn’t want to tell Meg: that our new site was just tanking.
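The breakage described above is easy to reproduce in miniature. This sketch mimics a segment rule that reads the trailing two characters of the variable as the country code; the exact prop and eVar value formats are paraphrased from the talk, not taken from HPE's actual solution design.

```python
# Sketch of the segment logic described in the talk: the country segment
# keyed off the last two characters of the variable. Value formats are
# illustrative.
def country_from_prop(prop_value: str) -> str:
    """Mimic a segment rule that reads the trailing two characters."""
    return prop_value[-2:]

# Old Clean Sheet format: "cs:" + country code. The rule works.
assert country_from_prop("cs:us") == "us"
assert country_from_prop("cs:jp") == "jp"

# New concatenated format: country + ":" + language.
# Where the two codes happen to coincide, the rule still returns
# something that looks like the country...
assert country_from_prop("de:de") == "de"

# ...but for the US the trailing characters are now the language,
# so US traffic falls out of the "us" segment and the SEO tool
# reports a phantom traffic drop.
assert country_from_prop("us:en") == "en"  # not "us": the segment fails
```

The lesson is the same one the audit taught: a downstream integration keyed on string position will silently break when the upstream format changes, and only an audit against a documented format catches it.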

Slide 9:

Fortunately, we were able to do a spot audit and identify the source of the drop. Once we got all these examples of the value of our auditing practice in place, it was time to build a long-term plan. What ended up happening was: we met up with our director, we sent a proposal to our vice president, and we said, “This is what we want to do. Can we have the funding? Do we have executive buy-in for that?” And they agreed.

So these are the steps that we decided to take. First of all, we were going to assign ownership of data quality, and we would take that on ourselves. That would actually be the Customer Marketing Analytics Group, my group. The next thing we were going to do was formulate a data governance plan, and we actually brought in some external consultants who have expertise in this to help out. We worked with Adobe in particular to make sure that we had funding for best practices here, to make sure that we understood which regions, which business units, which areas, which variables were mission critical and which were optional; basically, to take our solution design document and codify it into a playbook.

And then, audit against the plan: have recurring audits from our tag auditing solution that would let us know whenever a particular business unit or region or section or sub-site or marketing microsite diverged from the plan. We executed audits in QA, staging, and production, so that we don’t just catch mistakes after the fact, but actually catch them before they go live. And then, monitor key user scenarios. This is something we obviously had to do in partnership with our internal strategic stakeholders in software, servers, etc. We would identify key stakeholders in digital and, of course, in the channels, paid search, organic search, internal search, display, etc., and have them feed us key user scenarios. We would monitor those and make sure that they were all in compliance.
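The recurring audit-against-the-plan step can be sketched as a simple diff between a playbook and the variables actually found on a crawled page. The playbook structure and variable names here are illustrative stand-ins for a real solution design document, not HPE's actual spec.

```python
# Sketch: check a crawled page's analytics variables against a playbook.
# Section names, variable names, and the playbook shape are illustrative.
PLAYBOOK = {
    "storage": {
        "required": {"pageName", "eVar1", "prop7"},  # mission critical
        "optional": {"eVar54"},                      # only on paid-search landings
    },
}

def audit_page(section: str, found_vars: set) -> list:
    """Return the mission-critical variables missing from a page."""
    spec = PLAYBOOK.get(section)
    if spec is None:
        return []  # no plan codified for this section yet
    return sorted(spec["required"] - found_vars)

missing = audit_page("storage", {"pageName", "eVar54"})
print(missing)  # ['eVar1', 'prop7'] -> flag the storage team before launch
```

Running a check like this against QA and staging crawls, not just production, is what turns the audit from a post-mortem into a gate that catches divergence before it ships.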

That’s the go-forward strategy that we’ve implemented now at Hewlett Packard Enterprise, and so far I think we’re having great success. We’ve had tremendous feedback from our stakeholder directors and from our executive sponsors. Everyone is very engaged now, because they’re excited about the prospect of actually having confidence in the data. And you can’t have that unless you’re actually checking your work.
