Chris Peters - Data-Driven Mobile App Product Design

October 18, 2018

Data-Driven Mobile App Product Design

By Chris Peters of Cognetik

Slide 1:

Okay, thank you very much. I'm excited to be here. I hope everybody's enjoying the conference. I'm excited to talk to you about a topic that's near and dear to my heart: mobile app analytics.

Slide 2:

I've had some experience setting up practices with a couple of different clients, and I wanted to compare and contrast how mobile app analytics differs from your traditional web analytics. We'll start with that comparison view, and then we'll go into some best practices for setting up a mobile analytics practice. These are practices I've found along the way that hold up even if you're not in the mobile app world, so don't feel like this is a presentation only for people doing mobile apps. A lot of these things you can take away to your standard web practice, but they become even more important in the mobile app analytics world. We'll touch on that in just a second here.

 

So, we will compare and contrast, we'll talk about building a practice from scratch, and then we'll get into some of those best practices I mentioned. Then we'll talk about how you can really close the loop and understand where your analytics are actually driving revenue and driving changes in a mobile app. We'll wrap up with a conclusion and then have a Q&A at the end.

 

Slide 3:

So, let's dive right into the world of mobile apps. How is this different from your standard web analytics? Web analytics and tag management have sort of been commoditized at this point. Tag management is fairly easy; it's a known quantity. In the native app world, don't do it without a data layer. That's the experience I've had. I've used different products over the years that let you tag outside of release cycles, and I've just found that the data points are extremely inconsistent, and some of the breakage you see in development work is even more prominent in native apps. So definitely do not run an analytics practice on a native app without having a data layer in place.
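To make the data layer idea concrete, here's a minimal sketch, in Python purely for illustration (in a native app this logic would live in Swift or Kotlin): every event flows through one structured, schema-validated payload instead of ad hoc SDK calls scattered through the codebase. All field and function names here are hypothetical.

```python
# Minimal data layer sketch -- Python purely for illustration; in a native
# app this would live in Swift or Kotlin. All field names are hypothetical.

REQUIRED_FIELDS = {"event_name", "screen_name", "app_version", "user_id"}

def track(event: dict) -> dict:
    """Validate an event against the SDR schema before it reaches any SDK."""
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        raise ValueError(f"Event rejected, missing SDR fields: {sorted(missing)}")
    # In a real app, the validated payload would be handed to the analytics SDK.
    return event

# Every feature reports through this one layer, so tagging stays consistent
# across release cycles instead of drifting with each ad hoc SDK call.
track({
    "event_name": "add_to_cart",
    "screen_name": "product_detail",
    "app_version": "4.2.0",
    "user_id": "u-123",
})
```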

 

Now, going back to the web side, also don't do it without a data layer. But on web it's much easier to tag new features on the fly and be agile and nimble, whereas on native app, a lot of times your data layer is going to be dependent on a release cycle. So you have to really plan ahead and QA thoroughly, using a tool like ObservePoint. That quality work becomes even more important because you don't have as many windows open to affect the data being collected in your data layer. So, plan ahead. Have a really solid SDR (solution design reference).

 

Going back to the web side again, the web also has a very mature A/B testing ecosystem, and it's very easy to test your concepts quickly, on the fly. You can take an idea that you find in the data and be testing it within a week or so. On the app side, oftentimes it's not as easy. You have to get into a queue, and you have to have developers spin up testing experiences, and you may not have as many resources versed in that at your disposal. On a web A/B testing platform, there are a lot of people out there who know JavaScript and can spin up tests for you quickly. On app, you may have a harder time spinning those up.

 

Overall, web is a known quantity. It's easier to move from insight to a test idea to a live winner. On app, it's a little slower. It's a more complex ecosystem, and you have to be a little more careful about how you move from idea to implementation.

 

And so that's what we're going to cover in this presentation today: the best practices that make sure the insight you present today from the data, which may not get implemented for another six months, doesn't get lost in the shuffle, so you keep the ability to tie your insight to the action.

Slide 4:

 

So, that being said, I'm going to give you all a quote here: anything you lose in terms of speed and agility should be made up for by an insights-driven approach. It should be data-driven and intentional, even more so than a web practice.

Slide 5:

 

So, you're working with these stakeholders, and maybe you haven't worked with a product design team in the past. How do you cut through and build that trust and credibility? You're talking about people who may have data sources they already rely on, and their own way of rigorously thinking about how to design new features.

 

So, I wanted to share some best practices that I've seen along the way to cut through some of the initial inertia when you're working with a new group. One of the things I've seen, when starting to bring your analysts and analysis in with a new group, is to always make sure you have a customer service mindset at heart. I've seen analysts I've worked with in the past fall in love with an idea, or fall in love with something they've seen in the data, and really try to push that concept. And at times, the stakeholders don't align with the thing the analysts are trying to drive.

 

So, especially in those early stages, you want to make sure you have a customer service mindset and that you're aligned with the priorities of your stakeholders.

 

Along the same lines, the other thing that helps cut through some of that initial inertia is to find the biggest priorities of your stakeholders and then become laser-focused on solving those problems. Make their priorities your priorities. Say optimizing an ecommerce funnel is the one thing this group of product design folks is measured on that year. Make sure you're helping to optimize and analyze that ecommerce funnel, and stack those quick wins up. You may know there's a problem with onboarding, and if you can show those quick wins first, you'll have the credibility to share that information. So, stack those quick wins and build that trust early.

 

Another thing I've seen that helps build that culture of data in a product design world is to have a consistent analysis share-out. Make it a weekly or every-other-week thing where you're doing deep-dive analysis around the product. I've found that when you set up cadences like that, you hold yourself to deadlines, which keeps the velocity of your analysis up, especially in those early stages when you should be churning through a lot.

 

Then the other thing, very early on in any sort of relationship with these stakeholders, is to focus on the universal language of how the analysis you're doing is estimated to impact the bottom line. Very important. I think it builds easy alignment. You're not talking in terms of esoteric KPIs; you're talking about money, something that every business group can relate to.

 

And then, we'll talk a little more about strategies for this last one later on: follow through on actions all the way to completion. You may have done a pre-analysis on a product feature. Always make sure you do a post-analysis and really measure the results after the solution was built into the native app.

 

Slide 6:

 

So, I want to go into some of the best practices that we've seen along the way. Comparing web and app a little earlier, we talked about how app may not be as nimble and agile, and how it may be harder to get into a release cycle. Because of that, you may run into some data quality issues, and I know as an analyst that running into data quality issues can be one of the most frustrating things.

 

But in terms of a mobile app, and especially with a longer release cycle like that, don't let data quality issues grind things to a halt. If you have a robust SDR, nine times out of ten you should be able to proxy certain things that are going on and work around other data issues that you may have. An analyst working on a mobile app needs to be technical to a degree. They need to lean into looking at QA logs. They need to know how to set up a proxy through Fiddler or Charles. They need to know that SDR inside and out. They should be using those smart workarounds.

 

I have a picture here of a custom segment in Adobe Analytics. Sequential segments are your best friend: if you're missing a screen view right in the middle of a path, but you know the only way to reach a later screen is through the one where the data point is missing, you can infer the path somebody took to get there. So, sequential segments are one of the greatest tools to help you work around data issues. That's another strategy analysts can use to get around data quality problems.
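Here's a rough sketch of that sequential-segment logic, expressed in pandas rather than in Adobe Analytics, with made-up data: keep the visits that hit one screen and then a later one, and infer the missing screen in between when it's the only route.

```python
import pandas as pd

# Hypothetical hit-level data where the checkout screen view was dropped
# mid-path by a tracking bug.
hits = pd.DataFrame({
    "visit_id":  [1, 1, 1, 2, 2],
    "screen":    ["home", "cart", "order_confirm", "home", "search"],
    "hit_order": [1, 2, 3, 1, 2],
})

def matches_sequence(group: pd.DataFrame, first: str, then: str) -> bool:
    """True if the visit saw `first` and, on any later hit, `then`."""
    screens = group.sort_values("hit_order")["screen"].tolist()
    return any(s == first and then in screens[i + 1:]
               for i, s in enumerate(screens))

# Sequential-segment logic: visits that hit "cart" THEN "order_confirm".
# If checkout is the only route between those two screens, these visits can
# be inferred to have passed through checkout even though its view is missing.
inferred = hits.groupby("visit_id").filter(
    lambda g: matches_sequence(g, "cart", "order_confirm"))
print(inferred["visit_id"].unique())  # -> [1]
```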

 

The next bullet you see there: "find scrappy, flexible analysts (or become one yourself)." The thing you never want to hear is someone bristling that the data quality isn't there, that it's impossible, that we can't get to it. If your analysts are scrappy and thinking about how to find a way rather than about the problems at play, that will help so much in the mobile app world. So, scrappy, flexible analysts. Teach yourself to have that mindset.

 

And then, if you have gaps, supplement your data sources. You may have a gap that could be enriched with back-end transactional data. So know how to work around data gaps, but also know how to supplement using other data sources. Don't be myopically focused on the clickstream data and whatever data quality issues it has. You may have other data sources that would be a good substitute or that can enrich the areas where you have gaps.
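As a hedged example of that enrichment, here's a small pandas sketch with hypothetical column names that fills revenue gaps in clickstream data from a backend order system:

```python
import pandas as pd

# Hypothetical example: clickstream is missing order revenue for some
# sessions, but the backend order system has the authoritative numbers.
clickstream = pd.DataFrame({
    "order_id": ["A1", "A2", "A3"],
    "revenue":  [49.99, None, None],  # gaps from a dropped tracking call
})
backend_orders = pd.DataFrame({
    "order_id": ["A1", "A2", "A3"],
    "revenue":  [49.99, 19.99, 74.50],
})

# Enrich the gap: prefer the clickstream value, fall back to the backend value.
merged = clickstream.merge(backend_orders, on="order_id",
                           suffixes=("", "_backend"))
merged["revenue"] = merged["revenue"].fillna(merged["revenue_backend"])
print(merged[["order_id", "revenue"]])
```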

 

Slide 7:

 

So, we talked about data quality. Say you've taken all those best practices from the previous slide, you're working around data problems, and now you're really getting into some rich feature analysis. One of the strategies I've found for identifying features that are ripe for optimization in the mobile app world is to do a research study on your main content elements: take your bucket of features and look at the propensity to convert when somebody interacts with each one. Then you can start to see the features that under-index in terms of conversion start to pop.
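A minimal sketch of that propensity analysis, assuming hypothetical user-level flags for which features each user touched:

```python
import pandas as pd

# Hypothetical user-level data: feature usage flags plus a conversion flag.
users = pd.DataFrame({
    "user_id":       [1, 2, 3, 4, 5, 6],
    "used_search":   [1, 1, 0, 0, 1, 0],
    "used_wishlist": [0, 1, 1, 0, 0, 1],
    "converted":     [1, 1, 0, 0, 1, 0],
})

baseline = users["converted"].mean()
for feature in ["used_search", "used_wishlist"]:
    rate = users.loc[users[feature] == 1, "converted"].mean()
    # An index below 100 means users who touch the feature convert *less*
    # than average -- a candidate for optimization.
    print(f"{feature}: conversion {rate:.0%}, index {rate / baseline * 100:.0f}")
```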

 

So now you have some of the more impactful features on conversion, and you'll want to rank those in terms of importance along with your KPIs. Say navigation into the start of the ecommerce funnel is under-indexing because, from one of your most common landing pages, users can't find the best way to do that. You want to rank that against the biggest areas of opportunity you have.

 

Focus on the ones that add friction to your funnels. A funnel report is always a great way to understand where some of that friction is happening. And then start to tell the story with your secondary KPIs. In any sort of analysis, I always start with the top KPIs, your conversion rate and your revenue, and see where things manifest. But then tell the story with the secondary KPIs as you laser-focus on the features that may have issues. Say you have an onboarding problem. You can start to tell the story that when users hit that screen, your exit rate jumps, so there are probably elements on that screen that aren't easy for the customer to use. That's where your secondary KPIs really come into play: they add context as to why your overall KPIs are suffering.
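For the funnel side, a small sketch of the drop-off math on hypothetical step counts; the steps where drop-off spikes are where your secondary KPIs should focus:

```python
import pandas as pd

# Hypothetical funnel data: sessions reaching each checkout step.
funnel = pd.DataFrame({
    "step":     ["product", "cart", "checkout", "purchase"],
    "sessions": [10_000, 4_200, 1_500, 1_200],
})

# Step-to-step drop-off shows where the friction is; the secondary KPI
# (e.g., exit rate on the worst step) then tells the story of *why*
# the top KPI suffers.
funnel["drop_off"] = 1 - funnel["sessions"] / funnel["sessions"].shift(1)
print(funnel)
```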

 

And then, once you've isolated those problems, you've found the important features, and you've found the secondary KPIs that pinpoint the problem, make testing recommendations. For the analysts on my team, I always make sure that before we share a deck with a client, there's some sort of testing recommendation and takeaway for each prominent data point that's shared. You don't want to just show up and throw up with a bunch of data that may be very interesting but isn't grounded in any sort of action. So, testing recommendations are always a good idea to add to your decks.

 

And then, repeat this whole process over again as you start to develop a more optimized product.

 

Slide 8:

 

The next best practice I want to share in terms of analysis and product features is segmenting and personalizing. You want to find the most essential times in a customer's journey. So, segment out users that are very new, that are still in the onboarding process, and start to understand those specific areas and which features are important at that time in the user's lifecycle. You may look at a feature like social registration for a mobile app. That may be very important during onboarding, but after that point it's not important again. Feature Y may then help move somebody from onboarding to the next step. So really segment down to the particular times that are important to a customer. With a lot of mobile apps I've found the idea is to get people using them often, so onboarding is a critical point.

 

Discovery is a critical point. That initial navigation after onboarding. Are they actually finding the content that they want? Then converting is your next step. And then returning. I’ve found that looking at cohorts is very important. Looking at rolling views of customer age. Looking at lapsed customers becoming active customers again.

 

So, these are the important things to see in how these different groups are interacting with a mobile product, and I think it will really enrich your data as you start to build your groups to personalize. Some things we've done in the past: we looked at people who hadn't purchased within the first two weeks, a critical time in onboarding. We grouped them into "at-risk" buckets, and then we started to run tests targeted at only those at-risk customers, and we were able to measure how we impacted that group over time. So start to build KPIs off of these different important times in a customer's lifecycle.
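A sketch of that at-risk bucketing, using the hypothetical two-week rule from above:

```python
import pandas as pd

# Hypothetical lifecycle data: flag users with no purchase in their first
# 14 days as "at-risk", then target tests at only that bucket.
users = pd.DataFrame({
    "user_id":        [1, 2, 3],
    "signup_date":    pd.to_datetime(["2018-09-01", "2018-09-05", "2018-09-20"]),
    "first_purchase": pd.to_datetime(["2018-09-03", pd.NaT, pd.NaT]),
})
today = pd.Timestamp("2018-10-01")

days_since_signup = (today - users["signup_date"]).dt.days
users["at_risk"] = users["first_purchase"].isna() & (days_since_signup >= 14)
print(users[["user_id", "at_risk"]])
# The at_risk segment becomes the audience for a targeted test, and its
# purchase rate over time becomes the KPI for that lifecycle stage.
```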

 

Personalizing is great, but if you're not in a world where you can easily personalize, constantly simplifying that user experience is a very nice backup plan. Again, like we talked about with waiting for perfect data, don't wait for a perfect world where you have all the tools at your disposal to do this stuff in an automated fashion. Sometimes take those small wins, and if you can't personalize, start to focus on simplifying.

 

Slide 9:

 

And focus on insights. Don't focus strictly on the data, and don't make the data more complex than it needs to be. Talking about the complexity of the data often will not help your stakeholders. It may have taken many hoops to jump through to pull this data out, especially in a mobile app where you had to do workarounds and all sorts of different things, but that detail doesn't need to be shared.

 

You should also prescribe specific actions and tie them back to how they're going to cater to that customer segment.

 

So, I have an example here of how you can tie these things together: because infrequent purchasers are leaving the checkout screen and returning to home to click to a promotion at a 35% rate, test placing a promo CTA component directly on the checkout screen. That ties our segment, and the data we saw about people leaving the checkout screen, to a specific action that a mobile product team can go act on without having to take all the different data points and synthesize them themselves. Make it easy for your design teams to understand the true impact of what you're sharing out.

 

And don't be afraid to make those recommendations. I think it builds trust, I think it shows you're thinking in the same terms as your stakeholders, and it helps you refocus on why exactly this data is important.

 

Again, that's not something that applies only in the mobile app world. I feel it sharpens those analysis share-outs on web and on any channel where you're doing analysis for a stakeholder.

 

Slide 10:

 

And then, this is where we get into a really important point for a mobile app. We talked about longer release cycles. Assisting with prioritization should be one of the key things an analytics team does to support product design. There may be a bucket of new features that groups have an idea to work on, and all of them may seem important and time-sensitive. If it's all important, then none of it's important. So, as you're building out analysis for product stakeholders, catalog the testing ideas that have buy-in from your stakeholders, track them over time, and make sure they're not lost in the shuffle.

 

A strategy we've used in the past is to just have an Excel sheet or Google Doc where you list out all of the test ideas, score them, and start to help your stakeholders prioritize which test to run first. If nobody else is maintaining something like that, if you don't have a robust optimization team, then as the analysis group it's a good idea to catalog and maintain it, because it's really in your best interest to see those testing ideas through to completion. Don't just leave an interesting data point floating out there.
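A minimal sketch of such a catalog; the scoring convention here (estimated revenue weighted by analyst confidence, divided by effort) is one assumption of mine, not the formula from the talk, so use whatever model your stakeholders agree on:

```python
import pandas as pd

# Hypothetical test-idea catalog with made-up ideas and numbers.
ideas = pd.DataFrame({
    "idea":         ["promo CTA on checkout", "simplify onboarding", "new nav"],
    "est_revenue":  [120_000, 250_000, 60_000],  # annualized estimate, $
    "confidence":   [0.7, 0.5, 0.8],             # analyst confidence, 0-1
    "effort_weeks": [2, 6, 3],
})

# Simple priority score: expected value per week of development effort.
ideas["score"] = ideas["est_revenue"] * ideas["confidence"] / ideas["effort_weeks"]
print(ideas.sort_values("score", ascending=False))
```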

 

If you don't have a testing platform, take action on these optimizations anyway. Sometimes the data can tell you enough that you know the right optimization to make, or you can try an optimization out, do a pre-post analysis, and see if it's working. It can always be reverted, but if you don't have a testing platform, make sure you're doing the same thing with your optimization ideas.

 

And then, whenever possible, and this will really help with your tracking here, provide a range of estimated revenue impact. If you're disciplined about this, ranking testing ideas for optimization should never be a problem, because you have a dollar sign attached to what that insight should lead to.

 

So, your slower development cycles make this extremely important. There’s no room for not being organized and not capturing this level of prioritization over time.

 

Slide 11:

 

And then, always do a pre-post. You know, it's funny, you hear the adage thrown around a lot: "if you can't measure it, don't build it." Well, the same idea should apply to an analytics practice. If you can't measure the success of the value you're bringing, don't build it. With that said, you should always measure the success of what you're doing. You've done the hard work: you've identified problems, recommended solutions, made estimates, aided in prioritizing those solutions, and patiently waited in your development queue.

 

But you're not done. There should always be a post-analysis. You should look inward and see if your expected range of revenue was actually correct. Did you achieve what you were expecting? And if you didn't, you need to understand why, so that the next time you come with an estimate and a prioritization for a feature, you know how to do it better.
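A tiny sketch of that pre-post check with made-up numbers, comparing the realized conversion lift against the range estimated at prioritization time:

```python
# Hypothetical pre-post check: did the shipped change land inside the
# lift range we estimated when we prioritized it?
pre  = {"sessions": 50_000, "orders": 1_500}
post = {"sessions": 52_000, "orders": 1_700}
estimated_lift_range = (0.05, 0.15)  # the range committed to pre-release

pre_rate = pre["orders"] / pre["sessions"]     # 3.00% conversion
post_rate = post["orders"] / post["sessions"]  # ~3.27% conversion
lift = post_rate / pre_rate - 1

low, high = estimated_lift_range
verdict = "within" if low <= lift <= high else "outside"
print(f"Conversion lift: {lift:.1%} ({verdict} the estimated range)")
# If the result lands outside the range, dig into why -- that feedback loop
# is what makes the next estimate and prioritization better.
```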

 

So the pre-post is not just for saying "hey, we saw a win here." It's to make yourselves better analysts and understand where you may need to tighten some things up. You may have some losses, but you'll have some wins too. And if you're doing this correctly and staying disciplined with it, then when someone comes and asks what the value of the mobile analytics practice is, you'll have a dollar sign right there. You can say, "we took this insight that nobody was really looking at, it sat in the development queue for maybe longer than we wanted, but by the end of it we were able to say that this insight actually drove this dollar amount."

 

So, I think that's so important, and it's so easy for it to get lost in the shuffle when you're busy churning through analysis. But if this is something you do consistently, you can always talk about the worth of the program, and you can always point to specific wins as you're growing your practice.

 

Slide 12:

 

So, in conclusion, web and native app are two different beasts. A lot of the same best practices apply, but your margin of error is smaller. You want to be organized and careful in prioritization. You want to build a sustainable and durable process for measuring the wins that you've stacked up, and ultimately find the return on your investment for your efforts. Go out there, find that ROI, and make some exciting mobile app changes.

 

Thank you for your time. Now we’re going to do a brief Q&A. 
