10 Tips to Improve Mobile App Analytics
How to connect your core KPIs with A/B testing and build high-performing, successful mobile apps.
Data is a tool that can help you decide which app features to build, where to invest, and what to sunset. Successful companies often set up an A/B test for every change to their app and evaluate the test groups against their core metrics. Luckily, today we have a variety of free and powerful analytics tools, like Firebase, Facebook, Tapjoy (good analytics, but almost no docs), and others, that can help you build this setup. The one thing you need to take care of is picking the right core metrics for your application. The 10 tips below will help you do just that. So let's get started.
Core metrics are more than just a list of data points: you need the right tool for each metric, and there are quite a few to pick from. So let's start by walking through the core analytics tools you will need.
Events
Events capture user interactions with your app: clicks, registration, navigation, and so on. An event usually includes one or several metrics. For example, for a user "like" action, you want to know how many likes you get per day and which content users like.
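The like example above can be sketched as a name-plus-parameters event. The event shape here is illustrative, not a specific SDK's API:

```python
# A minimal sketch of event-style analytics: each event is a name plus
# parameters. The event fields here are made up for illustration.
from collections import Counter

events = [
    {"name": "like", "content_id": "video_42"},
    {"name": "like", "content_id": "video_42"},
    {"name": "like", "content_id": "photo_7"},
    {"name": "open_app", "content_id": None},
]

likes = [e for e in events if e["name"] == "like"]
likes_today = len(likes)                                  # how many likes a day
liked_content = Counter(e["content_id"] for e in likes)   # which content users like
```

Aggregating the same event by different parameters is what lets one "like" event answer both questions at once.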
Funnels
A funnel is the sequence of user steps required to complete a task. It helps you understand how successful users were at each step; a good funnel shows the aggregation of user actions for each step. For example, a two-step login flow: first, the user chooses the login method, so the actions could be Email/Google/Facebook/Exit App. Then the user needs to accept permissions, so the actions could be Accept/Decline/Exit App.
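A per-step funnel aggregation can be sketched as counting, for each step, how many users completed it (the user data below is made up):

```python
# Sketch: per-step completion counts for the two-step login funnel
# described in the text. Step names and users are illustrative.
user_steps = {
    "u1": ["choose_method", "accept_permissions"],
    "u2": ["choose_method"],             # dropped before permissions
    "u3": ["choose_method", "accept_permissions"],
    "u4": [],                            # exited immediately
}

funnel = ["choose_method", "accept_permissions"]
counts = [
    sum(1 for steps in user_steps.values() if step in steps)
    for step in funnel
]
total_users = len(user_steps)
conversion = [c / total_users for c in counts]  # fraction reaching each step
```

Reading the `conversion` list step by step tells you exactly where users drop out of the flow.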
Cohorts
Cohorts show you how groups of users behave over time. Retention, for example, measures how many users opened the app again after the first install. It can tell you that 60% opened the app after a day, 50% after a week, and 30% after a month.
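A day-N retention calculation for an install cohort can be sketched like this (install dates and open events are made up):

```python
# Sketch: day-N retention for one install cohort. Data is illustrative.
from datetime import date

installs = {"u1": date(2020, 1, 1), "u2": date(2020, 1, 1), "u3": date(2020, 1, 1)}
opens = {
    "u1": [date(2020, 1, 2), date(2020, 1, 8)],
    "u2": [date(2020, 1, 2)],
    "u3": [],
}

def retention(day_n):
    """Fraction of the cohort that opened the app exactly day_n days after install."""
    returned = sum(
        1 for user, day0 in installs.items()
        if any((d - day0).days == day_n for d in opens[user])
    )
    return returned / len(installs)
```

Calling `retention(1)`, `retention(7)`, and `retention(30)` gives the day/week/month numbers quoted in the text.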
Core Metrics
Core metrics are the tool you use to evaluate your experiments' success and decide which of the test-group variants to ship in your product.
User Engagement
The average app on Google Play loses 77% of its DAUs within the first 3 days after install. Improving your retention by 1% improves your bottom line by around 7%. In other words, user engagement matters. In this section, you will find the core metrics for measuring user engagement efficiently.
App Downloads
An event counting unique first-time-install or first-time-app-launch events. This metric becomes even more powerful when you record the install source as well. Both the Play Store and the App Store allow you to set up A/B tests for your app icon and store page; you can then compare the App Downloads metric between variants. In Q1 2020, Google Play and the Apple App Store reported a combined 33.6 billion app downloads, with the average smartphone user having 40 apps installed on their phone.
Active Users & the Stickiness Ratio
Active users are measured using the open-app event, aggregated by day, week, or month to give daily, weekly, or monthly active users, respectively. The ratio of daily active users to monthly active users is called the stickiness ratio. The higher your stickiness ratio, the more likely your users are to recommend your app to their friends.
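The stickiness ratio is a one-line calculation over deduplicated open-app events (the user sets below are made up):

```python
# Sketch: stickiness ratio = DAU / MAU, computed from unique users
# seen in open-app events. User IDs are illustrative.
daily_active = {"u1", "u2"}                 # unique users who opened the app today
monthly_active = {"u1", "u2", "u3", "u4"}   # unique users active in the last 30 days

stickiness = len(daily_active) / len(monthly_active)
```

Using sets rather than raw event counts matters here: a user who opens the app ten times in a day still counts once.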
Session Length
An event measuring the median session length of a user. For this metric, it is also interesting to look at the 75th or 90th percentile, to see just how skewed your sessions are. High skewness means you have several very different user groups, such as men and women or different age groups. Once you find out who those groups are, you can start building special features or promos for each of them.
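The median and high percentiles can be computed from raw session durations with the standard library (durations below are made-up seconds):

```python
# Sketch: median and 90th-percentile session length from raw durations
# in seconds. The sample data is illustrative; note the long tail.
import statistics

sessions = [30, 45, 50, 60, 70, 80, 90, 120, 300, 900]

median = statistics.median(sessions)
p90 = statistics.quantiles(sessions, n=10)[-1]  # last cut point = 90th percentile
```

A 90th percentile far above the median, as in this sample, is exactly the skewness signal the text describes.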
Session Interval
The definition of a session varies between applications and between analytics platforms; not every open-app event counts as a new session. The average time interval between sessions is a good indication of how frequently your app is used. Keep in mind that push notifications can affect this metric a lot.
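Once you have session-start timestamps, the average interval is the mean gap between consecutive starts (timestamps below are made up):

```python
# Sketch: average gap, in hours, between a user's consecutive session
# starts. Timestamps are illustrative.
from datetime import datetime

starts = [
    datetime(2020, 1, 1, 9, 0),
    datetime(2020, 1, 1, 21, 0),
    datetime(2020, 1, 2, 9, 0),
]

gaps_hours = [
    (b - a).total_seconds() / 3600
    for a, b in zip(starts, starts[1:])
]
avg_interval = sum(gaps_hours) / len(gaps_hours)
```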
Retention
A cohort of how many users returned to the app after their first visit. This is one of the most important metrics for your app, as it usually affects all the other metrics as well. According to Statista, 32% of users returned to an app 11 times or more during 2019.
User Feedback
This section covers the metrics that help you understand users' feedback on your app: how likely a user is to share your app with their friends, and so on.
App Rating
A numeric event that tells you what users think about your app. It's a good idea to add an in-app review step before sending users to the Play Store or App Store; you will usually get a higher response rate from the in-app review. A nice trick is to prompt for a store review only those users who gave you a good in-app rating.
User Reviews
A text review event. These are hard to analyze, but luckily there are many tools that can help, such as Appbot. The Play Store also has built-in review analytics that provide reports similar to Appbot's. Categorized user reviews can show you how your app performs in different areas, like stability, connectivity, and specific features.
App Performance
There is a strong connection between user retention and app performance, which means performance affects most, if not all, of your app metrics. Be sure to check out 10 Tips to Improve Mobile App Performance as well.
Startup Time
An event measuring the time from the start of your app's process until the first screen finishes loading; this is called the cold startup time.
Crashes
Many tools can help you capture the crashes in your app (my favorite is Crashlytics), but the most important thing you can do is allow your app to crash. During development, it is common to build the app in a "safe" way that swallows as many crashes as possible. This approach leads to hidden bugs that are much harder to spot than crashes.
API Response Time
It's a good idea to capture the duration of each of your API calls on both the client and server sides. When building big features, it is important to make sure you are not hurting performance.
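On the client side, the capture can be as simple as wrapping the call with a monotonic clock; the `report_event` hook below is a hypothetical stand-in for whatever analytics SDK you use:

```python
# Sketch: time an API call with a monotonic clock and return the
# duration so it can be reported as a numeric analytics value.
import time

def timed_call(fn, *args):
    start = time.monotonic()          # monotonic clock: immune to wall-clock jumps
    result = fn(*args)
    elapsed_ms = (time.monotonic() - start) * 1000
    # report_event("api_latency", {"ms": elapsed_ms})  # hypothetical SDK call
    return result, elapsed_ms
```

A monotonic clock is the right choice here because wall-clock time can jump (NTP sync, time-zone changes) and would corrupt the latency metric.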
User Conversion
There are multiple flows in your app where you can look at user conversion. When running experiments, it is interesting to see how those funnels are affected.
Push Notifications CTR
Push notifications work well when they give the user interesting information, like "You got a new message from Ilya" or "Your sword was sold!" You can measure this impact by calculating the CTR of each of your push notifications: a two-step funnel of "notification was shown" and "notification was engaged."
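The two-step funnel above reduces to a per-notification-type division (the counts are made up):

```python
# Sketch: CTR per notification type from shown/engaged counts.
# Notification names and counts are illustrative.
shown = {"new_message": 1000, "item_sold": 400}
engaged = {"new_message": 120, "item_sold": 80}

ctr = {name: engaged[name] / shown[name] for name in shown}
```

Comparing `ctr` across notification types tells you which messages users actually find interesting.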
Purchases &amp; Subscriptions
Analytics platforms usually provide built-in tools to measure the performance of your purchases and subscriptions, but I don't recommend relying on them. In my experience, funnels work better for this purpose: you can build a clean flow from first engagement to the final order, which makes it easier to set up experiments and understand user behavior.
Ads CTR
There are many restrictions on using ads in your app, but none on measuring your ads' CTR independently of your ad provider. Once you have this data, it adds another dimension to your experiments, as you can estimate the ad-revenue impact of each test group.
Lifetime Value (LTV)
How much money do you make from a user, from the moment that user starts using your app? To keep the calculation simple, use a single event with a numeric value representing the US-dollar revenue, and fire this event for ads as well; most ad providers expose this information through their APIs.
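With a single USD revenue event, per-user LTV becomes a simple sum over that one event stream (the events below are made up):

```python
# Sketch: per-user lifetime revenue from a single USD-valued event,
# fired for both purchases and ads. Event shape is illustrative.
from collections import defaultdict

revenue_events = [
    {"user": "u1", "usd": 4.99, "source": "purchase"},
    {"user": "u1", "usd": 0.02, "source": "ad"},
    {"user": "u2", "usd": 0.05, "source": "ad"},
]

ltv = defaultdict(float)
for e in revenue_events:
    ltv[e["user"]] += e["usd"]
```

Keeping ads and purchases in one event is what makes this a one-pass sum rather than a join across data sources.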
Tips to improve mobile app analytics
Use the right tools for your app. As I said earlier, there are many analytics tools to choose from; make sure you select one that lets you run the A/B tests you need and gives you a full picture of the results.
Use meaningful features and analytics. Don't wait to sunset a feature that doesn't work. If a metric is not giving you the data you intended to collect, or a feature is not bringing the results you expected, you would be wise to shut them down. It will both improve your performance and simplify your analytics and development cycles.
Trust your instincts. Data can indeed surprise us, but even more common are bugs in our data. Always double-check your results and try to verify your data against multiple inputs. If your instinct tells you something isn't right, it's perhaps because you don't see the full picture; seek out more inputs. You can always improve your trust in the data.
Integrate analytics into app development. It's important to have analytics at the core of your product development cycle: if you can't test it, how can you trust it? When planning your features, dedicate space for analytics and plan how the A/B experiments will be set up.
Use hold-out groups. An experiment can show good results in the short run while degrading other important metrics. It's a good idea to keep a hold-out group for whom the feature is not released for a long time. This lets you better measure the experiment's success in the long run, and sometimes its failure.
Avoid local wins. It is sometimes tempting to release a feature only to a small population where it worked, for example one country or one gender. In my experience, it isn't worth the effort: the resources required to maintain several versions of your app for each population are just too expensive. It requires separate planning and experiments, and the complexity grows exponentially. Be efficient, and don't keep features that don't work for everyone.
Keep it stupid simple. If you can't find a simple way to measure a feature, don't measure it at all; rely on your other metrics instead. Sometimes we wish to make data connections in our app that require high maintenance; those often break when we update the product, and we end up with misleading data. Keep your analytics simple and don't let them impact your product too much, or you risk becoming inefficient.
Use benchmarks. Benchmarks are a great way to understand your potential and know what to expect from your product. So check out what others did and see if you can blend it into your data expectations.
Be mindful of sampling. Sampling is unavoidable when A/B testing your app, but keep in mind that a sample only hints at the behavior of the rest of your app's population, and you may run into surprises when you release.
Hire a mobile analytics expert to help you set things up, someone like me. There are many challenges in the mobile world; make sure you have the right tools to make the right decisions.