A/B Testing Archives | Leanplum a CleverTap Company
https://www.leanplum.com

Focus on What Works With A/B Testing
https://www.leanplum.com/blog/focus-on-what-works-with-a-b-testing/
Tue, 03 May 2022

There’s no doubt that A/B testing is paramount to any app’s success.  With data-based insights, app businesses can iterate faster and know that they’re making sound decisions.  And today, there are tools to make A/B testing even easier for you.

In turn, you can concentrate on proven strategies for better retention, monetization, and revenue. You won't be left spinning your wheels and wasting valuable time and resources on messaging, campaigns, and app features that bring less-than-ideal results. In the last post of our blog series, we discuss the critical role A/B testing can play in enabling your teams to do more with less.

How to test without a data scientist

You may not have a resident data science expert — but the good news is that you can still run successful tests without one. Multichannel mobile platforms can help you figure out everything from which tests to run to sample sizes and how to calculate results.

Platforms like Leanplum factor in audience size, data points, and the elements you're trying to optimize when running tests. This includes safeguards to keep tests from running too long.

These tools can analyze statistical significance among a wide range of metrics.  Using them, you can see where there are notable changes over your starting point.  You can also gain insights into the downstream effect on metrics over time and how those changes influenced your KPIs.
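
These calculations don't require anything exotic. As a rough illustration of the kind of check such a platform runs behind the scenes, here is a minimal two-proportion significance test in Python with invented conversion numbers; a real tool would also handle sequential monitoring and multiple metrics.

    # Minimal significance check for a conversion-rate A/B test.
    # Numbers below are illustrative only.
    import math

    def two_proportion_test(control_conv, control_n, variant_conv, variant_n, alpha=0.05):
        p1 = control_conv / control_n
        p2 = variant_conv / variant_n
        pooled = (control_conv + variant_conv) / (control_n + variant_n)
        se = math.sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
        z = (p2 - p1) / se
        # Two-sided p-value from the standard normal distribution
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return p2 - p1, p_value, p_value < alpha

    lift, p_value, significant = two_proportion_test(480, 10_000, 545, 10_000)
    print(f"lift: {lift:+.2%}  p-value: {p_value:.4f}  significant: {significant}")

If the p-value falls below your chosen threshold, the change over your starting point is unlikely to be noise; the downstream and long-term effects still need to be read from your KPI reports.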

In the end, simplicity is key. Keep your tests limited to one to three aspects that you'd like to assess and optimize. At the same time, keeping the number of setup tools to a minimum saves you from sending your audiences through a bunch of different systems.

Take the Leanplum product tour

Practical applications and use cases to get rid of the guesswork

You can apply A/B testing to messages and campaigns across your customer journey.  Everything from onboarding, engagement, upgrades, special offers and promotions, and abandoned cart recovery can be tracked, tested, and refined.

A/B testing can also validate hypotheses and improve features across the in-app experience.

Release new features with confidence

When you're preparing to launch new features, you can roll them out to a specific audience segment first. These segments can include anything from customers who registered in the last three months to a random subset of 10% of your audience.

During testing, collect feedback from your customers and make changes as needed.  Once you’re sure about the feature, you can make it available to the rest of your users.  
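
Under the hood, a staged rollout like this usually comes down to stable bucketing: each user is deterministically assigned to a bucket so they keep seeing the same experience across sessions. A small sketch of the idea in Python, with a hypothetical feature name and a 10% threshold:

    # Stable rollout bucketing: hash the user ID so the same user always
    # falls in the same bucket, then expose the feature to the first N%.
    import hashlib

    def in_rollout(user_id: str, feature: str, percent: int) -> bool:
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100  # stable value between 0 and 99
        return bucket < percent

    # Expose a hypothetical new feature to roughly 10% of users
    for uid in ["user-1001", "user-1002", "user-1003"]:
        print(uid, in_rollout(uid, "new-checkout-flow", percent=10))

Raising the percentage later widens the audience without reshuffling the users who already have the feature.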

Improve the purchase funnel

Experiment with your paywalls.  Subscription apps can test which features to include behind a paywall for premium users.  At the same time, you can strike a balance with what you offer free users to entice them to upgrade.

Look at different options for how and when to introduce the paywall to maximize subscriptions. For instance, you can offer a premium upgrade when a user takes actions that indicate the need for a paid feature. Or vary the length of trials and subscriptions and compare results.

Mobile retail apps can make product recommendations, while on-demand and mobility apps can look at different use thresholds.  Customer behavior patterns and context need to be factored in as well.

Identify the best price point

Consider trying different coupon values to find the best ROI for different user segments. Testing price points, discounts, and formulas can help you determine which are most effective. Does a discount of 10% or 20% bring more conversions?  And what effects do your messaging and content have?
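
When comparing discount depths, conversion rate alone can be misleading, because a deeper discount gives up margin on every order. One simple way to frame the comparison, sketched below with made-up numbers, is revenue per targeted user:

    # Compare revenue per targeted user for two coupon depths.
    # All figures are invented; plug in your own test results.
    def revenue_per_user(conversions, users, avg_order_value, discount):
        conversion_rate = conversions / users
        return conversion_rate * avg_order_value * (1 - discount)

    ten_pct = revenue_per_user(conversions=620, users=10_000,
                               avg_order_value=40.0, discount=0.10)
    twenty_pct = revenue_per_user(conversions=790, users=10_000,
                                  avg_order_value=40.0, discount=0.20)
    print(f"10% coupon: ${ten_pct:.2f} per user")
    print(f"20% coupon: ${twenty_pct:.2f} per user")

In this made-up example, the 20% coupon earns more per user despite the deeper discount; you would still want the underlying conversion difference to be statistically significant before rolling it out.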

Download our Data Science report and learn how to 3x game revenue

Real-world examples and results by industry

Let’s look at how app businesses have optimized their customer journeys with A/B testing.

Give users what they need when they need it

A/B testing can help you learn what, when, and how your customers need different types of content.  You might, for instance, see if your onboarding flow gives new users the best starting point for continued engagement.  Check if you’re including the right content and using the best channels for your product promotions.

Find a winning offer for mobile games

GameDuell used A/B testing to identify one offer package that stood out above the rest — at a whopping 87%.  They tested three offers: gems (the game’s currency), gems and boosters (to aid progress), and gems and collectibles (special in-game items).  The offer using only game currency (gems) performed best.  

Subscription apps gain higher retention, LTV, and revenue

Teltech combined the power of CRM and A/B testing to enhance the Robokiller customer journey.  First, they optimized their top-of-funnel activities.  Then, they moved on to testing reactivation, monetization and paywall, and retention.  Teltech came up with solid insights into popular features, points of friction, and product performance.

Fuel higher usage for on-demand and mobility apps

Via used A/B testing to encourage users to take rides more often — bringing 27% more riders back to their app. They tested lifecycle campaigns to see which ones drove customer retention. Coupling these campaigns with targeting by user attributes and advanced segmentation made it possible to create reactivation campaigns with relevant content and messaging.

Mobile retail and finance optimize the purchase flow and increase revenue

Tesco wanted to improve monetization and revenue.  They used A/B testing to remove friction and determine ways to get customers to add more items to their carts.  The company tested different Add to Cart button variations to see which worked best.  In the end, their findings led to a 3.3% revenue increase and continued A/B testing to drive sales.

Spend more time on the right things with A/B testing

Your time and resources are limited and valuable.  Why waste what you have on messaging, campaigns, and app elements that don’t deliver results?

A/B testing can guide you and your team to greater success.  Nowadays, technology can make testing faster and much simpler for you without requiring extensive data science expertise. 

So, you can spend time on the initiatives that’ll improve your users’ experience — increasing retention and revenue in the process.

Want to learn more?  Check out these additional resources.

Experiment Design: Your Framework to Successful A/B Testing
https://www.leanplum.com/blog/experiment-design-ab-testing-framework/
Thu, 09 May 2019

A/B testing — putting two or more versions out in front of users and seeing which impacts your key metrics — is exciting. The ability to make decisions on data that lead to positive business outcomes is what we all want to do.

Though when it comes to A/B testing, there is far more than meets the eye. A/B testing is not as simple as it’s advertised, i.e. “change a button from blue to green and see a lift in your favorite metric”.

The unfortunate reality of A/B testing is that in the beginning, most tests are not going to show positive results. Teams that start testing often won’t find any statistically significant changes in the first several tests they run.

Like picking up any new strategy, you need to learn how to crawl before you can learn how to run. To get positive results from A/B testing, you must understand how to run well-designed experiments. This takes time and knowledge, and a few failed experiments along the way.

In this post, I’ll dive into what it takes to design a successful experiment that actually impacts your metrics.

 

Setting Yourself Up for Success
First up: Beyond having the right technology in place, you also need to understand the data you're collecting, have the business smarts to see where you can drive impact for your app, bring the creative mind and process to come up with the right solutions, and have the engineering capabilities to act on all of this.

All of this is crucial for success when it comes to designing and running experiments.

Impact through testing does not happen on a single test. It’s an ongoing process that needs a long-term vision and commitment. There are hardly any quick wins or low-hanging fruit when it comes to A/B testing. You need to set yourself up for success, and that means having all those different roles or stakeholders bought into your A/B testing efforts and a solid process to design successful experiments. So, before you get started with A/B testing, you need to have your Campaign Management strategy in place.

When you have this in place, you’re ready to start. So how do you design a good experiment?

Designing an Experiment
The first step: Create the proper framework for experimentation. The goal of experimentation is not simply to find out "which version works better," but to determine the best solution for our users and our business.

In technology, especially in mobile technology, this is an ongoing process. Devices, apps, features, and users change constantly. Therefore, the solutions you’re providing for your users are ever-changing.

Finding the Problem
The basics of experimentation start — and this may sound cliché — with real problems. It's hard to fix something that is not broken or is not a significant part of your users' experience. Problems can be found where you have the opportunity to create value, remove blockers, or create delight.

The starting point of every experiment is a validated pain point. Long before any technical solution, you need to understand the problem you chose to experiment with. Ask yourself:

  • What problems do your users face?
  • What problems does your business face?
  • Why are these problems?
  • What proof do you have that shows these are problems? Think surveys, gaps or drops in your funnel, business cost, app reviews, support tickets, etc. If you do not have any data to show that something is a problem, it's probably not the right problem to focus on.

Finding Solutions (Yeah, Multiple)
Once the problem is validated, you can jump to a solution. I won't lie: quite often you will already have a solution in mind, even before you've properly defined the problem. Solutions are fun and exciting. However, push yourself to first understand the problem, as this is crucial not just to finding a solution but to finding the right solution.

Inexperienced teams often run their first experiments with the first solution they could think of: "This might work, let's test it," they say.

But they don’t have a clear decision-making framework in place. Often, these quick tests don’t yield positive results. Stakeholders in the business lose trust in the process and it becomes harder to convince your colleagues that testing is a valuable practice.

My framework goes as follows.

  1. Brainstorm a handful of potential solutions (aim for around eight). Not just variants: completely different ways to solve the problem for your users within your product.
  2. Out of this list, grab the two or three solutions you'd mark as "most promising." This call can be based on gut feeling, technical feasibility, time and resources, or data.
  3. Now, for each of these most promising solutions, find up to four variants.

This process takes you from the single solution you started with, tested against the control, to a range of about 10 solutions and variations that can help you reach a positive result. In an hour of work, you significantly increase your chances of creating a winning experiment.

Now that you have your solutions, we're almost ready to start the experiment. But first…

Defining Success
We now have a problem and have a set of solutions with different variants. This means we have an expected outcome. What are we expecting to happen when we run the test and look at the results?

Before you launch your test, you need to define upfront what success will look like. Most successful teams have something that looks like this:

  • Primary decision-making metric: The primary decision-making metric is the goal metric that you want to impact with your test. It's the single most important user behavior you want to improve.
  • Secondary decision-making metrics: These are often two to three metrics. They are directly impacted by the experiment but aren't the most important ones. The secondary metrics create context for the primary decision-making metric and help us make the right decisions. Even if the primary metric is positive, too large a decline in the secondary metrics could change your verdict on whether the experiment was a success.
  • Monitoring metrics: These are extremely important. You don't use them to decide on the success of the experiment's outcome, but on the health of the experiment's environment.

With an A/B test, we want a controlled environment in which we can decide whether the variant we created has a positive outcome. Therefore, we need monitoring metrics to ensure the environment of our experiment is healthy. These could be acquisition data, app crash data, version control, and even external press coverage.
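
One lightweight way to hold yourself to these definitions is to write them down as a structured experiment spec before launch. A sketch of what that could look like; the field and metric names below are only placeholders:

    # A minimal experiment spec that makes the team name its metrics up
    # front. Field and metric names are placeholders, not a fixed schema.
    from dataclasses import dataclass, field

    @dataclass
    class ExperimentSpec:
        name: str
        hypothesis: str
        primary_metric: str                  # single decision-making metric
        secondary_metrics: list = field(default_factory=list)
        monitoring_metrics: list = field(default_factory=list)

    onboarding_test = ExperimentSpec(
        name="onboarding-v2",
        hypothesis="A shorter onboarding flow increases day-1 activation.",
        primary_metric="day1_activation_rate",
        secondary_metrics=["day7_retention", "time_to_first_purchase"],
        monitoring_metrics=["crash_rate", "new_user_volume"],
    )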

Setting the Minimum Success Criteria
Alongside the predefined metrics on which you'll measure the success of your experiment, you need clear minimum success criteria. This means setting a defined uplift that you would consider successful. Is an increase of 10 percent needed to be satisfied that you've solved the problem, or is 0.5 percent enough?

Since the goal of running an experiment is to make a decision, these criteria are essential to define. As humans, we're easily persuaded. If we don't define upfront what success looks like, we may be too easily satisfied.

For example: If you run a test and see a two percent increase in your primary decision-making metric, is that result good enough? If you did not define success criteria upfront, you might decide it's okay and roll out the variant to the full audience.

However, as we have many different solutions still on the backlog, we have the opportunity to continue our experimentation and find the best solution for the problem. Success criteria help you to stay honest and ensure you find the best solution for your users and your business.
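
The minimum uplift you settle on also dictates how much traffic, and therefore how much time, the test will need: the smaller the lift you want to detect, the larger the sample. A rough sample-size estimate for a conversion metric, sketched with standard values of 95% confidence and 80% power:

    # Rough sample size per variant to detect a given relative uplift on a
    # conversion rate, at 95% confidence and 80% power (z-values hard-coded).
    import math

    def sample_size_per_variant(baseline_rate, relative_uplift):
        p1 = baseline_rate
        p2 = baseline_rate * (1 + relative_uplift)
        z_alpha, z_beta = 1.96, 0.84  # 95% confidence, 80% power
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

    print(sample_size_per_variant(0.05, 0.10))   # 10% lift: tens of thousands of users
    print(sample_size_per_variant(0.05, 0.005))  # 0.5% lift: millions of users

That gap is exactly why the 10-percent-versus-0.5-percent question matters: chasing a lift your traffic can't support leads to tests that never reach a clear decision.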

Share Learnings With Your Team
Finally, share your learnings. Be mindful that learnings sometimes come from a combination of experiments in which you optimized toward the best solution.

When you share your learnings internally, make sure that you document them well and share with the full context — how you defined and validated your problem, decided on your solution, and chose your metrics.

My advice would be to find a standard template that you can easily fill out and share internally. Personally, I like to keep an experiment tracker. This allows you to document every step and share the positive outcomes and learnings.
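
The exact format matters less than filling it out consistently. As one hypothetical example, a tracker entry can be as simple as a structured record per experiment:

    # One entry of a simple experiment tracker. Every value here is a
    # fictional placeholder to show the shape of the record.
    experiment_log_entry = {
        "experiment": "onboarding-v2",
        "problem": "Many new users drop off before finishing onboarding.",
        "solution_tested": "3-step flow instead of 6-step flow",
        "primary_metric": "day1_activation_rate",
        "result": "primary metric up vs. control; secondary metrics flat",
        "decision": "roll out to all new users",
        "learnings": "shorter flows help; next, test deferring the signup step",
    }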

Creating a Mobile A/B Testing Framework That Lasts
All this is a lot of work — and it's not always easy. Setting up your framework for experimentation will take trial, error, education, and time! But it's worth it. If you skip any of the above steps and your experiment fails, you won't know where or why it failed, and you're basically back to guessing. We all know the notion of "move fast and break things," but spending an extra day to set up a proper test that gives the right results and is part of a bigger plan is absolutely worth it.

And don't worry, you'll still break plenty of things. Most experiments are failures, and that is fine. It's okay to hurt a metric with an experiment. Breaking things means that you're learning and touching a valuable part of the app. That's the whole reason you run an experiment: to see if something works better. Sometimes it doesn't. As long as you have a well-defined experiment framework, you can explain why this happened and set up a follow-up experiment that will help you find a positive outcome.

Leanplum is a mobile engagement platform that helps forward-looking brands like Grab, IMVU, and Tesco meet the real-time needs of their customers. Schedule your personalized demo here.
