7 Strategies for Mobile App Analytics Testing in 2018
Thank you everyone for being here today. This is a really exciting day. I’ve seen some good presentations so far and I’m excited for what’s going to happen the rest of the day. My name is Sun Sneed and I’m the manager of product here at ObservePoint. I really appreciate you taking the time to listen in today. I’ve been in the product implementation and development space for over 10 years, and here at ObservePoint I head the product team. Last year, I had the opportunity to launch a very innovative product to help automate the tedious process for analytics QA for mobile apps. I’m so excited to share some of my thoughts about testing and mobile app analytics today. Let’s get started.
Mobile apps, as you surely know, are more relevant than ever. I looked up some numbers for you. Just in this quarter, more than 26 billion apps were downloaded worldwide. That is eight percent year-over-year growth, which is pretty impressive. The worldwide spend, which is even more interesting, I think, outgrew those downloads by three times and summed up to $17 billion in the last quarter. The time spent in apps, which is something that helps drive business within apps, grew over 40 percent on Android to a total of 325 billion hours worldwide in this last quarter.
What we also see a lot in the market is mobile-first development. A lot of times companies actually develop for a mobile experience first and then export it to the web. I’m sure you heard last year how mobile search traffic outgrew web search traffic. Then finally, since Thanksgiving is around the corner, I just pulled the number from last year, and mobile shopping was continually on the rise. Last year’s Black Friday was the first ever to generate over a billion dollars in U.S. online mobile sales. I just looked up the number for Amazon Prime Day, and the revenue for Amazon Prime Day was bigger than the total U.S. retail revenue on Black Friday last year, so pretty impressive.
Enterprise apps are a new growing market. You will find a mobile focus will continue to grow in 2018. There’s always this discussion: is it going to be apps or are apps going to go away and it’s all going to be in the browser? I think everybody can have their own opinion, and right now, nobody really knows. Personally, I think as long as Apple is doing good business in mobile apps, mobile apps are here to stay. We will see what next year will bring.
When you’re thinking about mobile, mobile unfortunately is not easy. There are a lot of challenges. When you look into the data, Localytics, for example, has a statistic out there saying that a quarter of all apps downloaded are only used one time. And 58 percent of users churn in the first 30 days. These are pretty high numbers. We have these impressive downloads, we have a lot of people wanting to spend time in apps, but then they’re churning away. This emphasizes the importance of good analytics and a good strategy that will help you keep your customers loyal and in your app experience.
We looked into the Forrester data, and only 43 percent of digital business professionals today employ mobile app analytics. Only a quarter of all app developers include crash reporting in their apps’ analytics. There’s a big opportunity in the market to use analytics, to use these SDKs, to help learn from what’s happening in your app and do better. So, what are the challenges? There are technical challenges, and I’ll go over these in just a bit. Don’t forget the process and the people. You’ve got to bring the whole team along for these challenges. And you have more empowered users on your mobile phones.
Luckily, we’re here to talk about 7 ways to win inside of your mobile app experience. One is: address the device and operating system fragmentation in your testing strategy. We’re going to talk a little bit more about the technical challenges, and really understanding what the market looks like and how you can test best for this market. Manage the size of your app and the chattiness of your app. Testing early and often. Testing along the process from development into your market. Use mobile analytics. Testing your mobile analytics implementation. And lastly, start with the end in mind. Why are you doing all of this?
Let’s deep dive into the first technical challenge around device and operating system fragmentation out there. I think the thing for you to remember is performance and stability matter a lot to your customers.
Why is this a challenge? Because out there, you have so many different operating systems. The two prevalent ones, obviously, are Android and iOS. However, there is the current Android version, there are old Android versions, and there are even about 20 percent of users out there still on an Android version called KitKat, so 4.4. When you think about this, Android just introduced 8, which is Oreo. When you think about the spread of operating systems, you can think about how many different functionality bases you have to deliver against as you’re creating your app experience. If you want to do this in a good way, you have to have a good testing strategy to tackle all these fragmented code bases.
In your organization, you will usually have a team for your Android development and a team for iOS, or you’re working with agencies, and they might even be different agencies. Each platform needs a different skillset of developers, and with that we’ll usually see that you’re working with different teams. You need to be able to translate your vision to these different teams, and you need to be able to communicate your requirements in a way that all these teams can work together and create that wholesome experience for all your users. Then lastly, if you want to do good testing, you also need to get a hold of all of these app files.
This is a good way to illustrate how fragmented the device market for Android is. When you think about this and how you want to test for all these versions, you cannot get around test automation.
Another good source that I like to visit is Android’s developer page. And I was alluding to this—you see KitKat, Lollipop, all the versions still have a lot of outdated users on them. A chunk of users is still on Marshmallow. Oreo really has just started in the market, Android 8 was just launched, so this data is not really showing much of the growth there.
Coming back to our challenges. How do you solve for this? First and foremost, try to be involved during development. Do testing during development. Make sure you know the people who are working on your mobile app experience and talk to them so they can understand your requirements better. Also, you’ll typically find yourself working in cross-functional teams. Don’t be a silo. I know, especially for analysts, it’s often difficult to have a seat at the table from the very beginning, but it’s so valuable to be there from the start. Some companies even have digital governance strategies that enforce a seat at the table for the analyst. I know people these days talk about having UX seated at the table, but I think analysts should be there just as much.
Then make sure that you are able to test before the release because once it’s in the store, once it’s in the market, it’s just so hard to bring changes into the market and into the devices of your users. With a web page, it’s fairly easy to push out a new release and have everybody on the newest status. With mobile apps, since it’s a piece of software that is installed on your users’ device, it’s just so much harder.
Then make sure you employ market monitoring. Look into your crash report data, look into your analytics data, and do not forget very valuable feedback channels, specifically the Apple App Store and the Google Play Store. This is where people leave their feedback. Often when you go there and see your ratings, you can react to a low rating and turn around a bad experience quickly enough that it won’t affect your business in a bad way.
Another tip could be to consider comparing your app store download data with the incoming analytics data and see if you have any inconsistencies, which could be telling you that your analytics are not firing the way you would like them to fire. Or when you choose your analytics SDK, make sure you figure out how you want to manage your data later. Do you have a unique identifier that is more than just an API key, so you’ll be able to find your user across different devices?
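To make that download-versus-analytics comparison concrete, here is a minimal Python sketch. The function name, the 10 percent tolerance, and the sample numbers are all hypothetical; the idea is simply to flag days where tracked installs fall noticeably short of store-reported downloads.

```python
# Hypothetical check: compare store-reported downloads with installs
# tracked by the analytics SDK, per day, and flag suspicious gaps.
def install_discrepancy(store_downloads, analytics_installs, tolerance=0.10):
    """Return (day, gap) pairs where analytics undercounts installs."""
    flagged = []
    for day, downloads in sorted(store_downloads.items()):
        if downloads == 0:
            continue
        tracked = analytics_installs.get(day, 0)
        gap = (downloads - tracked) / downloads
        if gap > tolerance:  # more than 10% of installs went untracked
            flagged.append((day, round(gap, 2)))
    return flagged

store = {"2017-11-01": 1000, "2017-11-02": 1200}
tracked = {"2017-11-01": 950, "2017-11-02": 800}
print(install_discrepancy(store, tracked))  # only day two exceeds the tolerance
```

A persistent gap like this can point at an install event that never fires on certain devices or OS versions.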
Another big one is the size of an app. Size does matter and it also matters how much your app actually talks with your backend because that will ultimately drain the battery of your user.
Questions can be: how many SDKs does your app actually employ? How chatty is your app? How many times does it call home? What is the effect on battery drainage? And what’s the overall file size?
Why is this so important? Because customers care about the space they have on their phone. I have a Samsung phone and there’s only so much space that I can put apps on, and if that space gets fuller and fuller with my pictures, I’m more willing to kick off an app than to delete my pictures.
When we look into the market data, I think this is really interesting. What are the typical tools that are employed in the market? You can see here with developers, Google Analytics seems to be the winning tool in the Android market. However, Flurry has a good market share too. Even though it looks kind of small, what is even more interesting than the unique share is when you look into the actual number of SDKs that are implemented.
A lot of apps will have more than one SDK, and even more than one analytics SDK. When we look into who the actual winners are, we find the winners somewhere in the three-to-five-SDK range. The most downloaded apps out there are the ones that have three to five SDKs. People that install 11 to 15 SDKs, in my mind, need to go back and think a little bit about their strategy, because I’m sure having so many SDKs is actually bloating their code and making their users’ experience much worse.
This, I find, is also a really good example. Segment has done this study. Segment is a provider that does a lot of what SDKs do on the server side, which allows apps to keep their file size way smaller. When you look into what happens as your file size grows, you can see that the bigger the file size is, the lower the install rate on mobile phones.
So, bigger file size equals lower install rate equals less business, which means you really want to be careful about how many additional SDKs you install on top of your current code in your mobile app.
In conclusion, test and observe the network traffic. How many calls are you sending? And if you’re sending that many calls, are you actually draining your users’ batteries? Are your users going to like the fact that their battery is being drained? One way to avoid that is to use batch requests. So instead of constantly communicating with your backend, you could have packaged batch requests that listen to what’s happening on the user’s phone for a while and then send the requests out. Since the SDKs also change the file size, is there a way to not double your app size? Maybe speak to your development team. Can they minify libraries? Can they keep out libraries that are usually there only for development reasons?
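As a rough illustration of the batching idea, here is a Python sketch that queues events locally and sends them in one call once the batch is full. The class and parameter names are my own invention, not any particular SDK’s API.

```python
# Sketch of client-side event batching: instead of one network call per
# analytics event, queue events locally and flush them together.
class EventBatcher:
    def __init__(self, send, max_batch=20):
        self.send = send          # callable that posts a list of events
        self.max_batch = max_batch
        self.queue = []

    def track(self, event):
        self.queue.append(event)
        if len(self.queue) >= self.max_batch:
            self.flush()

    def flush(self):
        if self.queue:
            self.send(self.queue)  # one call carries the whole batch
            self.queue = []        # start a fresh batch

sent = []  # stands in for the network; a real app would POST each batch
batcher = EventBatcher(sent.append, max_batch=3)
for name in ["open", "view", "tap", "purchase"]:
    batcher.track({"event": name})
batcher.flush()  # e.g. on app background, push out the remainder
print(sent)  # two calls total: a batch of three events, then one of one
```

Fewer radio wake-ups means less battery drain, which is exactly the trade-off batching is meant to win.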
Also consider data leakage. Everyone’s talking about GDPR right now. So, it’s really important to understand what data are the SDKs that are in your app actually sending and where they are sending this data to.
Another strategy to win is test early and test often. I said that earlier in my presentation. Go back and sit with the developers. Be at the table when requirements are being written, understand what’s happening in your process, obviously test in the QA phase, but also test in the market and use all this insight to feed back into your process and optimization. Things you can do early, even before development: UX testing, usability testing, customer acceptance testing, prototyping.
Then during your development phase, unit testing is something that developers typically do, but also alpha testing. Can you have users you can show your offering to at a very early phase? And by that, learn about the user experience, learn where people break off the experience, learn how you can implement things that will keep your users engaged. Also check your analytics implementation. There’s no way you can learn from your analytics if your analytics are not sending the right data, so make sure your analytics implementation is spotless.
There is something to say about emulator versus test-device testing. We’ve talked about how many devices are out there. Some core functionalities you can always test on an emulator, and it will get you 80 percent of the way, but then there are things you can only test on devices. Today there are device test labs out there that you can connect to. Some of those test labs allow you to use automated, pre-recorded tests, so there’s a lot that can be done.
I just read a study that 80 percent of testers out there prefer not using test automation because there is a cost involved in automating your testing. Sometimes it’s more effective and gets you further along if you just have somebody sitting there testing on one device. What I will say, though, is that having a test is better than having no test, so if you can’t be perfect, at least go ahead and test. Make sure you have crash analytics and operational analytics: is my API actually available? How quickly are calls being answered? Things like that.
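Here is a minimal Python sketch of what such an operational check could look like, assuming a `probe` callable that returns a response time in milliseconds or raises on failure. The function, the thresholds, and the latency numbers are hypothetical.

```python
# Hypothetical operational-analytics summary: probe the API a few times and
# report availability, average latency, and how many calls were slow.
def api_health(probe, attempts=5, slow_ms=500):
    ok, slow, failed, times = 0, 0, 0, []
    for _ in range(attempts):
        try:
            ms = probe()        # returns latency in ms, raises on failure
            times.append(ms)
            ok += 1
            if ms > slow_ms:
                slow += 1
        except Exception:
            failed += 1
    return {
        "availability": ok / attempts,
        "avg_ms": sum(times) / len(times) if times else None,
        "slow": slow,
        "failed": failed,
    }

# Fake probe for demonstration; a real one would time an HTTP request.
latencies = iter([120, 700, 80])
print(api_health(lambda: next(latencies), attempts=3))
```

In practice you would run this on a schedule and alert when availability or latency crosses a threshold.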
During the QA phase, you’re going to do manual testing and you’re going to do test automation. I just mentioned this a little bit. Again, be sure to test your analytics with a tool like ObservePoint, for example. Do some beta testing. Put it in front of users, and make sure as you’re doing the beta testing that you have enough feedback channels employed that you can actually use that feedback during your QA phase and optimize your release candidate. Again, there’s the discussion of test emulators versus test devices and device clouds.
QA is typically the time to do performance testing. So often, things go out that were tested, but only with a few users. Hitting your API with a lot of requests will help you avoid one of those meltdowns with your backend, which can happen. When I worked at Deutsche Telekom, we had an app out there to support a soccer game experience back in Germany. They didn’t have enough servers up, and all of a sudden everybody in the stadium wanted to use that service. Unfortunately, it became unavailable. You want to prevent that with performance testing and having enough resources available.
Lastly, in the world we live in today, with a lot of agile development, you want to go ahead and iterate, and maybe have developers stand by so they can optimize and act on the feedback that’s coming in during this phase. Then, in-market, very clearly use your in-app analytics SDKs, the in-app analytics you’re capturing, analytics from the stores, analytics from third-party sources like App Annie, operational analytics, and voice of customer. As I said earlier, user feedback and ratings are such a valuable source for learning what’s really going on in market and what you can act upon. Because user feedback is often very qualitative, you need a good process in place to identify the urgency and frequency of this feedback so you can act on the right things first.
I already talked a little bit about mobile analytics. In-app analytics capture not only what some people call the vanity metrics, your daily active users, your monthly active users, but help you understand your engagement cycle: where people click first, where people click second, what the typical path is that your user takes in your app, and essentially where your breaking points are.
But also look at your success events. Where are you not breaking, and how can you learn from that? Payment experiences are often one place where your conversion might drop off. Look into whether you are using best practices in your conversion flow, or whether there is maybe a better payment provider you could implement within your app. Because it’s a third-party API, are you measuring whether something is dropping off there because of a technical issue? Do you have the right alerts configured for that?
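As a hypothetical example of such an alert, this Python sketch fires when the payment conversion rate drops well below a known baseline. The threshold, the drop factor, and the sample numbers are illustrative only.

```python
# Hypothetical alert rule: fire when today's payment conversion rate falls
# below half of the known baseline rate.
def conversion_alert(started, completed, baseline_rate, drop_factor=0.5):
    if started == 0:
        return False  # no traffic at all is a separate, operational alert
    rate = completed / started
    return rate < baseline_rate * drop_factor

# Baseline conversion is 60%; today only 200 of 1000 checkouts completed.
print(conversion_alert(1000, 200, baseline_rate=0.6))  # 20% < 30%, so True
```

A sudden drop like this often signals a broken third-party payment API rather than a change in user behavior, which is why wiring the alert to the conversion funnel catches it early.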
In-store analytics, both iOS and Android—the big stores—give you a lot of information and feedback on how your app’s performing. Also, this feedback can be used for in-store search optimization, which is still one of the most typical paths on how users find new apps within the stores. So, you really want to make sure that you have your search optimization for the app stores in place.
Then operational analytics, this is more of the functional data. Is the API up? Is the API responding? What are response times? How quickly can your system act on the incoming traffic? Things like that.
Voice of customer, as I said earlier: feedback and in-store ratings. When are you collecting the ratings? When you collect ratings, are you bugging your customer by asking for a rating right away? Or have you identified the moment when your customer is the happiest, and do you get a rating from them then?
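One way to encode that “happiest moment” idea is a simple rule, sketched here in Python with made-up thresholds; any real app would tune these against its own engagement data.

```python
# Made-up rule of thumb for when to show a rating prompt: only ask engaged
# users, right after a success, and never after a recent crash.
def should_prompt_rating(sessions, days_installed, just_succeeded,
                         recent_crash, already_rated):
    return (not already_rated
            and not recent_crash       # don't ask after a bad experience
            and just_succeeded         # e.g. a completed purchase or level
            and sessions >= 5          # an engaged user, not a first launch
            and days_installed >= 3)   # past the churn-heavy first days

# Engaged user who just completed a purchase: a good moment to ask.
print(should_prompt_rating(12, 30, True, False, False))  # True
```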
Then real-time versus historical data. Both, I think, have their place and their use. Real-time data can help you act quickly, but it is a lot of data, so you might run into a lot of noise. You really want to pick wisely what kind of data you want to look at in real time, and what kind of data you want to look at historically to understand the past and then extrapolate into the future.
Then specifically with real time, what kind of data processing capabilities do you have in-house? What kind of data processing capabilities do your analytics tools have when it comes to predictive, when it comes to some sort of artificial intelligence, and push notifications? Can you get a real-time alert from your real-time data that is predicting some things so that gives you something in your hand that you can act upon?
These are also areas where you now combine analytics data with marketing: targeted marketing and push notification capabilities. Also a big question: how many push notifications do you implement, which tool are you using for them, and how does it incorporate with your analytics tools and data?
Lastly, kind of along the lines of operational analytics, there are the technical metrics: crashes, what kind of device users are on, battery performance, which we talked about earlier, and then a whole new topic: location and contextual data.
Why are we talking so much about testing your implementations? Because a defect is so much more expensive when you find it in production and you have to fix it there, versus if you found it during the design, build, or testing phase.
To sum it up, start with the end in mind. Understand your goal. Why are you doing all of this? Why do you even have an app experience? Document that goal. Figure out what your KPIs and metrics are and make sure that across your organization, everybody buys into these KPIs and is driven by the same KPIs and metrics.
Document how you want to measure that. Make a tagging plan, an SDR. Measure, learn, and adapt. Keep your documentation up to date. Make sure everybody has access to that documentation. Maybe have it centrally stored and available in a dynamic way.
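As an illustration of keeping the tagging plan in a dynamic, machine-readable form, here is a hypothetical Python sketch of an SDR entry plus a validator. The event name and parameters are invented for the example.

```python
# Hypothetical, machine-readable SDR / tagging-plan entry, so automated tests
# can validate captured events against the documentation.
TAGGING_PLAN = {
    "purchase_complete": {
        "required_params": {"order_id", "revenue", "currency"},
        "description": "Fires once when the payment provider confirms payment.",
    },
}

def validate_event(name, params, plan=TAGGING_PLAN):
    """Return a list of problems for one captured event (empty means OK)."""
    spec = plan.get(name)
    if spec is None:
        return ["undocumented event"]
    missing = spec["required_params"] - set(params)
    return ["missing param: " + p for p in sorted(missing)]

captured = {"order_id": "A1", "revenue": 9.99}  # currency was never sent
print(validate_event("purchase_complete", captured))  # flags the missing param
```

Storing the plan as data rather than a static spreadsheet means the same document that everyone reads can also drive your automated analytics tests.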
With these seven strategies for mobile app analytics testing, I hope you’re going to be able to enjoy your mobile app success in 2018. Thanks so much for your time.