Analysts and other MarTech stakeholders can’t afford to skip regular testing, but they also can’t afford to spend all their time constantly testing every data collection rule after each and every website update.
With automated testing, they can accomplish both, while making data collection easier, more efficient, and more accurate.
Learn from Chris O’Neill, Solutions Engineer at ObservePoint, and guest speaker Ali Shoukat, Manager of Technology Innovation at Cisco, about how to:
- Implement automated testing
- Improve data quality
- Monitor and increase SEO rankings
- Enhance page performance
Fill out the form to view the webinar on demand.
All right. Thank you, Aunica, and thank you everyone for joining us today. Today we'll talk with you about three ways to optimize your website with test automation. We'll share some insights on how we've handled data quality, SEO, and page performance.
One of the problems we had when it comes to data quality for a large website like cisco.com is that it was very difficult to monitor how data tags fire on the website, ensure that data is acquired correctly, and know how data fires throughout the buyer journey.
A website like ours serves 80 countries, localized over many regions, serves almost 6 million pages, and receives 8 million visits a month. On top of that, we serve about 4 billion data tags a year, and this entire operation is managed by 4,000 internal and external users. To overcome such a problem, we needed a digitized system that would allow us to audit all data running on the website and provide an easy reporting mechanism. We also needed a way to validate which data tags are mission critical versus which ones are supplementary, and then identify the first-, second-, and third-party data tags that are running on the site and critical to our revenue marketing. Last but not least, the solution needed a way to flag and report any gaps or discrepancies in the data, or any issues with the architecture, so we could fix them quickly.
Okay. And when Ali came to us, Cisco was not only dealing with data quality issues, but dealing with data quality issues at scale. With Cisco being such a large web property and the tracking being so complex and comprehensive, we first had to think strategically about how we were going to design the scans so that we could test and monitor all the key tracking KPIs, and take into account the scale and the number of different country domains and flows we were trying to test. A couple of other things we needed to take into consideration when planning this out were making sure we could test all of our tags in production and pre-production environments, as well as test desktop versions of the web properties and trigger the mobile-responsive nature of the property.
One of the first types of scans we did for Cisco was to identify all the tracking pixels, or tags, being loaded inside and outside of the tag management system. This required us to first run a scan canceling out the network requests sent by the TMS, which gave us a clear view of all the tags that were hard coded on the page, along with any tags piggybacked on or initiated by those hard-coded tags. Then we could rerun the scan capturing all the network requests and see exactly where every tag was being initiated from. Another big part of the project was determining the parameters of what the data should look like as campaigns ran. We had to set up alerts around key data points, making sure the data was coming back correctly, and we needed to do this in an automated fashion because of how large the Cisco web property is and all the data they are tracking. Lastly, we identified all of the JavaScript console log errors related to tags, so we could catch any error, warning, or debug message that would cause problems with the tags firing, which in turn would cause problems with the data quality.
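The two-pass scan described above can be sketched in a few lines. This is a hypothetical illustration, not ObservePoint's implementation: it assumes each pass yields the set of tag request domains observed on a page, and the domain names are made up for the example.

```python
# Sketch: classify tag requests as hard-coded vs. TMS-initiated by diffing
# two crawl passes of the same page. Domain names here are illustrative.

def classify_tags(full_scan_requests, tms_blocked_requests):
    """Diff two scan passes of the same page.

    full_scan_requests   -- tag request domains seen with the TMS enabled
    tms_blocked_requests -- tag request domains seen with TMS requests blocked
    Returns (hard_coded, tms_initiated) sets of domains.
    """
    full = set(full_scan_requests)
    blocked = set(tms_blocked_requests)
    hard_coded = blocked                 # these still fire without the TMS
    tms_initiated = full - blocked       # these only appear when the TMS runs
    return hard_coded, tms_initiated

hard, via_tms = classify_tags(
    ["analytics.example.com", "pixel.adnetwork.example", "cdn.example.com"],
    ["cdn.example.com"],
)
# hard     -> tags hard-coded on the page
# via_tms  -> tags initiated by the tag management system
```

In practice the "TMS blocked" pass would be produced by a crawler that cancels requests to the TMS container URL, but the classification step itself reduces to this set difference.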
One use case where this solution proved to be of great value came earlier this year, when ObservePoint alerted our team that our analytics data tags were dropping off the site. Analytics data tags are very critical to our digital marketing; we use those tags to monitor the user journey and the engagement throughout that journey, so losing them was a serious problem. With the automated solution, we were able to quickly identify and diagnose the issue, then fix it and verify the fix in real time. That use case helped us save and improve the quality of about 3.5 million marketing analytics records.
Now, another opportunity to optimize the site with test automation was checking and testing for search engine optimization. The problem here is that a site like Cisco.com serves an extremely diverse and large amount of content, with very segmented and dispersed publishing operations across multiple geographical regions. That made it very difficult to ensure and monitor that the SEO data and meta tags were up to standard for the site, at least in an automated way. A manual solution wasn't really an option for us, so we needed an automated process. The idea of this process is that it can pull all types of content that exist on the site: web pages, links, images, videos, documents, and any other types of content. After scanning that content, we needed a way to compare those results against custom, predefined rules that could help us ensure SEO is up to standard. Those rules needed to be custom and controlled by the business, as priorities change and even as trends evolve (as we know, keywords were important years ago, but nowadays there's more emphasis on descriptions). Keeping up with those changing trends and priorities is important for us. Finally, this solution needed to flag and report any content that fell short of those SEO quality rules.
Perfect. Most people traditionally think of ObservePoint as helping with data quality around tracking pixels. But checking, testing, and monitoring these SEO data points is actually another great use case where ObservePoint can help large properties, especially one like Cisco where automation is required to gather and monitor this data. So one of the first things we did was configure an ObservePoint test to scrape the DOM for the key SEO metrics according to Ali's SEO strategy.
We also captured the data points around the metadata related to SEO. A very simple example: we would scrape the DOM and report how many H1 tags were on each URL, and then also report the metadata of those H1 tags, to make sure everything was in compliance with their SEO strategy. After that, the next step was to take a monitoring approach around all the key SEO metrics with regularly scheduled scans. Then we created alerts to flag any page that came back with an invalid page element, or where any of the metrics, such as the number of H1s or the metadata related to these data points, fell outside the parameters established by Ali and his team.
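The H1 check described above can be sketched with nothing but the Python standard library. This is a simplified, hypothetical version of such a check, not ObservePoint's code; the compliance rule (exactly one H1 per page) is an assumed example of the kind of rule Ali's team might set.

```python
# Sketch: scrape a page's DOM for H1 tags and flag pages that break a
# simple SEO rule ("exactly one H1"). Stdlib only; rule is illustrative.
from html.parser import HTMLParser

class H1Collector(HTMLParser):
    """Collects the text content of every <h1> element in a document."""
    def __init__(self):
        super().__init__()
        self._in_h1 = False
        self.h1_texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self._in_h1 = True
            self.h1_texts.append("")

    def handle_endtag(self, tag):
        if tag == "h1":
            self._in_h1 = False

    def handle_data(self, data):
        if self._in_h1:
            self.h1_texts[-1] += data

def audit_h1(html):
    """Return the H1 count, texts, and whether the page passes the rule."""
    parser = H1Collector()
    parser.feed(html)
    texts = [t.strip() for t in parser.h1_texts]
    return {"count": len(texts), "texts": texts, "compliant": len(texts) == 1}

result = audit_h1("<html><body><h1>Networking</h1><h1>Extra</h1></body></html>")
# result["compliant"] is False: two H1s violate the example rule
```

Running this per URL on a scheduled crawl, and alerting whenever `compliant` is false, mirrors the monitoring approach described above.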
And then, last, there was another really nice use case where we were able to scrape the DOM and determine areas where the consent manager, the privacy banner, was not being loaded. Once we configured ObservePoint to go to every single page, we could run a quick check to make sure that privacy banner was loading correctly.
As Chris described, the success use cases from this solution were really plentiful. As a business, as we expand our marketing tech stack and add the ObservePoint capability to it, we're able to control how we want to see success if [inaudible] because of that customizability. In one use case, for example, the solution helped us identify outdated and, in some cases, invalid 404 redirects. This was a real problem for our site because it impacted how effectively search engines indexed and organized our content. So having that systematic, automatic way of regularly scanning against the rules we identified as important to us, the ability to customize them, and the ability to catch issues before damage was done was very helpful for us.
Now, another opportunity for test automation is checking page performance. Page performance is really important for our website: we do a lot of user experience personalization, we serve a lot of relevant content, and we have many journeys we serve. So making sure our website is top notch when it comes to experience and speed was very important, because of the high demand in terms of volume and experience. We are also spread across many geographical regions, and our business differs, in a way, from one geography to another, which raised multiple challenges when it comes to page performance. It was very difficult to proactively know how fast our site was and get that data accurately and repeatably. One tool that came to good use and gave us real benefit was Google Lighthouse. Google Lighthouse can report on how a page performs in terms of SEO, accessibility, and even speed, and in some cases tell us if there are issues with the site. But the problem was that it was very manual; to make it useful for us, we needed to automate it.
Automating a tool like Google Lighthouse, and giving it the ability to scan across the many pages on the site, was critical to success and to getting value out of Google Lighthouse, along with standardizing the tool. Another issue we found when we tried to utilize Google Lighthouse is that it will give you a different set of scores if you run it from two different regions, or even for different users across multiple regions. For example, when I run the report in San Jose, I'll get great numbers; my colleagues who run the same report to test the pages and content in Asia will get very different numbers. Those discrepancies showed us that we needed to standardize: we needed to make sure it always ran consistently, the same way, on the same device. And then, obviously, flag and report any low-performance inconsistencies so we can fix them in a simple, comprehensive way.
Yeah, this is actually a really exciting use case. ObservePoint traditionally has reported on tag performance metrics and only very lightly on page performance. Then Cisco came in with Ali and their team, and they're very forward-thinking in how they understand page performance; they are in sync with Google Lighthouse and its methodology. Traditionally, some people may think of page performance as a single metric: how long did my page take to load? But as you dive into it, you realize there are all sorts of metrics that can determine what page load time is: first ping, last ping, DOM ready. You need to understand, and define, what page performance and page load time mean for you. Google Lighthouse is an excellent resource for understanding that better, so you can have hard metrics to judge your websites on and make sure they're loading at the appropriate speed. And there are a lot of variables, such as where you're coming from, what type of browser you're using, and the connection, that Lighthouse can help you get insight into. Cisco actually pushed us to build an integration where we can grab those Google Lighthouse metrics at scale. Today you can look at Google Lighthouse in your browser on a page-by-page basis; ObservePoint built out a tool to grab all of those metrics at scale, so we can point it at an entire property like cisco.com and grab all the metrics that Google Lighthouse grabs. Then we imported that data into ObservePoint and married it with the data we were collecting around the tags and the SEO to get a 360-degree view of how the site is actually performing. And now you can judge how the tags are performing in relation to the page speed. So they're able to look and say, "You know what, Page X is having a slow load time. Let's go ahead and drill down. Let's take a look at the tags. 
Let's see if any of them have slow load times." So we can really understand where the cause of the slow load times is. ObservePoint now runs scans on a regular cadence for Cisco, grabbing all the Lighthouse metrics on every single page we crawl and pulling them into ObservePoint. We can then create alerts around any of those data points, and this allows Cisco to monitor their website from one platform in an aggregated view.
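The aggregation-and-alerting step described above can be sketched as follows. Each Lighthouse run emits a JSON report; the structure shown here is simplified to the `finalUrl` and `categories.performance.score` fields, and the URLs, scores, and threshold are illustrative, not Cisco's actual data.

```python
# Sketch: aggregate Lighthouse reports collected across a crawl and flag
# pages whose performance score falls below a threshold. Report dicts are
# simplified stand-ins for Lighthouse's JSON output; values are examples.

def flag_slow_pages(reports, min_performance=0.75):
    """Return (url, score) pairs for pages scoring below the threshold."""
    alerts = []
    for report in reports:
        score = report["categories"]["performance"]["score"]
        if score < min_performance:
            alerts.append((report["finalUrl"], score))
    return alerts

reports = [
    {"finalUrl": "https://example.com/a",
     "categories": {"performance": {"score": 0.91}}},
    {"finalUrl": "https://example.com/b",
     "categories": {"performance": {"score": 0.58}}},
]
alerts = flag_slow_pages(reports)
# only the second page falls below the 0.75 threshold
```

Because every run happens from the same standardized environment, scores are comparable across pages, which is what makes a single threshold-based alert meaningful, addressing the regional inconsistency problem described earlier.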
As Chris said, this is big news and we're really excited. This partnership has been very creative in coming up with this solution. We're really excited because the idea is to make sure that the personalization experiences we provide on our sites run smoothly and at top speed, consistently across the regions. Ultimately, the solution will help us sustain our page load times below 4 seconds. Arguably, we want to be under three, but let's see how phase one goes; the goal at the moment is four seconds. So far it's been giving us very good insights. This partnership, and this new, creative way of expanding automation across multiple use cases, has been very helpful in pushing the limits of our website.
In closing, our partnership with Cisco has really helped us develop a QA practice around all aspects of their site, so that we can test and monitor and make sure all the data they are collecting provides the insights to drive the customer experience in an optimized fashion for their sites. We were able to ensure and monitor all of the SEO quality with a digitalized approach, and to flag specific areas they want to improve based on the metrics they provided, the KPIs they're looking at. ObservePoint is able to expand and scale this testing for a property as large as Cisco's and proactively monitor it on a regular basis. And all of this was done much quicker than a manual effort: ObservePoint was able to automatically scan the site, grab the data, process the data, and create alerts around all the metrics that were important to Cisco.