Category: Product | Solution: DataMa Compare, Impact, Assess, Journey (Analytical Solutions)
Type: A/B Test | Client: Commodities
Tags: #ProductPerformance #GoogleAnalytics #Conversion #ABTest
Context: A/B testing for a lead generation funnel
The client is a major commodity player trying to optimize the way users interact with their website to generate more leads. The client has planned an important lead generation funnel reshape, and used DataMa professional services to help with an in-depth A/B test analysis.
A quick review of A/B testing fundamentals: companies should continuously optimize their websites, but they need to be sure that any change has a beneficial impact. For this reason, running A/B test campaigns is essential. Data analysis is then critical to address the following points:
- Determine whether the results of the A/B test are significant at the chosen level of precision
- Check that versions A and B are evenly distributed across all your user segments, to avoid introducing bias into the results
- Determine whether the changes have affected any particular user segment(s)
- In the case of a major change, analyze how user behavior has changed between the two versions and thus guide future product development
A/B tests cover a wide range of changes, from a simple tweak to a single CTA (Call To Action) to the complete redesign of a registration, lead generation, or purchase funnel.
Taking the latter case, the redesign of a lead generation funnel, the first step is to size the A/B test, which is essentially an equation with four variables: the duration of the test, the volume of users, the variation between the two versions, and the level of significance.
Before the test, it is useful to know how long to run it before choosing the better-performing version. To do this, the other three variables must be fixed: the average volume of users on the platform is known, the significance level is commonly set at 95%, and a hypothesis must be made about the variation we wish to detect in order to determine the duration of the A/B test (in other words, how long the test must run for the results to be significant).
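As an illustration of this four-variable equation, here is a minimal sketch using a standard two-proportion power calculation (not DataMa's own sizing logic); all figures are hypothetical:

```python
from statistics import NormalDist

def sample_size_per_variant(p_baseline, relative_lift, alpha=0.05, power=0.8):
    """Users needed per variant for a two-sided two-proportion test."""
    p_b = p_baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% significance
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p_baseline * (1 - p_baseline) + p_b * (1 - p_b)
    return int((z_alpha + z_beta) ** 2 * variance / (p_b - p_baseline) ** 2) + 1

def duration_days(n_per_variant, daily_users):
    # Both variants share the daily traffic 50/50.
    return 2 * n_per_variant / daily_users

# Hypothetical inputs: 5% baseline conversion, detect a +10% relative lift,
# 95% significance, 80% power, 20,000 users per day on the platform.
n = sample_size_per_variant(0.05, 0.10)
days = duration_days(n, 20_000)
```

Fixing three of the four variables (users, significance, expected lift) yields the fourth: the test must run roughly `days` days before the result can be trusted.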
Approach: Using all Solutions in DataMa to provide an in-depth analysis
To simplify the analysis and make it easily understandable for stakeholders, the analytics team created a market equation using key performance indicators (KPIs) that are the standard in Web Analytics, such as number of leads and number of users reaching step X of the funnel.
The resulting market equation looks like a classic funnel:
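As a rough illustration of such a funnel-style market equation (all volumes hypothetical), the number of leads decomposes into the entry volume times a chain of step-to-step conversion rates:

```python
from math import prod

# Hypothetical funnel volumes for one version of the site.
funnel = {"users": 100_000, "step1": 40_000, "step2": 18_000,
          "step3": 9_000, "leads": 4_500}

steps = list(funnel)
# Step-to-step conversion rates: each volume divided by the previous one.
rates = {f"{a}->{b}": funnel[b] / funnel[a] for a, b in zip(steps, steps[1:])}

# The market equation: leads = users x product of step-to-step rates.
reconstructed_leads = funnel["users"] * prod(rates.values())
```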
The data for this use case came from Google Analytics, aggregated in a BigQuery table. The necessary metrics allow us to follow user behavior over the lead generation funnel. Dimensions, such as product category, device, traffic source, and country, are included to properly identify which segment the variations come from.
Here is an example (anonymized) dataset
For the Customer Journey analysis, a dedicated dataset is needed, with a ‘Journey’ column containing each user's path (for instance: “Home-Step1-Step2-Step3”)
Here is an example (anonymized) dataset for Journey
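A minimal sketch of how such a ‘Journey’ column can be built, assuming hit-level rows of (user id, timestamp, step name); the column names and data are hypothetical:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical hit-level rows: (user_id, timestamp, step name).
hits = [
    ("u1", 1, "Home"), ("u1", 2, "Step1"), ("u1", 3, "Step2"),
    ("u2", 2, "Step1"), ("u2", 1, "Home"),
]

hits.sort(key=itemgetter(0, 1))  # order by user, then by time
journeys = {
    user: "-".join(step for _, _, step in rows)
    for user, rows in ((u, list(g)) for u, g in groupby(hits, key=itemgetter(0)))
}
```

Each user then gets one "Home-Step1-…" string, which is the shape the Journey dataset expects.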
Confirming the random distribution of traffic
The aim is to verify that traffic is well distributed between versions A and B across all segments, to avoid bias in the analysis. How can we do this? There are sometimes deliberate exceptions: for example, users who access the site via email might be systematically redirected to version A, so this segment needs to be excluded from the analysis before interpreting the results. In DataMa, finding these inconsistencies is very simple thanks to the “Simple Test Matrix” table, where you can look at the degree of correlation between every pair of dimensions:
The aim is for the correlation between the version dimension and each of the other dimensions to be close to 0.
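The Simple Test Matrix is a DataMa feature, but the underlying idea can be sketched with a generic association measure between categorical columns, such as Cramér's V (a chi-square-based statistic where 0 means independent and 1 means fully tied); the data below is hypothetical:

```python
from collections import Counter
from math import sqrt

def cramers_v(xs, ys):
    """Cramér's V between two categorical columns: 0 = independent, 1 = fully tied."""
    n = len(xs)
    joint, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    chi2 = sum(
        (joint.get((x, y), 0) - px[x] * py[y] / n) ** 2 / (px[x] * py[y] / n)
        for x in px for y in py
    )
    k = min(len(px), len(py)) - 1
    return sqrt(chi2 / (n * k)) if k else 0.0

# A perfectly balanced split: every version/device combination equally frequent.
pairs = [(v, d) for v in "AB" for d in ("mobile", "desktop", "tablet")] * 100
balanced = cramers_v([p[0] for p in pairs], [p[1] for p in pairs])

# A fully biased split: version A only on mobile, version B only on desktop.
biased = cramers_v(["A"] * 10 + ["B"] * 10, ["mobile"] * 10 + ["desktop"] * 10)
```

A value near 0 between the version dimension and a segment dimension means the split is safe to analyze; a value near 1 flags a segment (like the email redirect above) to exclude.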
Validating the significance of the test
A question that frequently arises during an A/B test is when to stop it, that is, when the results are significant enough to choose version A or B. It is important to keep in mind both the theoretical duration chosen before the A/B test and the operational constraints (see our research article on this topic).
With DataMa Assess, the client had direct access to this answer in the “Detail view” graph, using the appropriate statistical test (Bayesian, Frequentist, or Bootstrap) and significance level (usually set at 95%).
In the platform, using the cumulative significance view, the client was able to see from which date the test started to be significant:
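The idea behind a cumulative significance view can be sketched with a Frequentist two-proportion z-test recomputed on the running totals each day (DataMa's own implementation may differ; all daily figures below are hypothetical):

```python
from statistics import NormalDist

def z_test_pvalue(leads_a, users_a, leads_b, users_b):
    """Two-sided two-proportion z-test p-value."""
    p_a, p_b = leads_a / users_a, leads_b / users_b
    p_pool = (leads_a + leads_b) / (users_a + users_b)
    se = (p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b)) ** 0.5
    if se == 0:
        return 1.0
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical daily totals: (users_a, leads_a, users_b, leads_b).
daily = [
    (1000, 50, 1000, 55), (1000, 48, 1000, 60), (1000, 52, 1000, 62),
    (1000, 49, 1000, 58), (1000, 51, 1000, 61),
]

cumulative, significant_from = [0, 0, 0, 0], None
for day, row in enumerate(daily, start=1):
    cumulative = [c + r for c, r in zip(cumulative, row)]
    users_a, leads_a, users_b, leads_b = cumulative
    p = z_test_pvalue(leads_a, users_a, leads_b, users_b)
    if significant_from is None and p < 0.05:
        significant_from = day  # first day the cumulative result is significant
```

The first day the cumulative p-value drops below 0.05 is the date from which the test "started to be significant".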
Identifying sources of improvement/regression
Once the best version is identified, it remains to be seen on which segments this version performed best, and whether this matches the initial intent. For example, in this use case, the lead generation funnel was redesigned to improve mobile performance, so we expect the conversion rate on these devices to increase markedly, while on desktop we expect the redesign to have no negative effect. The analyses show that this is not the case: the funnel redesign actually led to statistically significant lower performance on all device types.
Thanks to the Waterfall graph in DataMa Compare, the client was also able to identify the step where they saw the change, and for which segments: we can see that the redesign had an impact on the transition from step 1 to step 2 in all segments.
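DataMa Compare uses its own waterfall decomposition; as a rough illustration of the idea, here is a simple log-based split of the overall conversion change across funnel steps (all rates hypothetical):

```python
from math import log, prod

# Hypothetical step-to-step conversion rates for each version.
rates_a = {"home->step1": 0.40, "step1->step2": 0.45, "step2->lead": 0.50}
rates_b = {"home->step1": 0.40, "step1->step2": 0.54, "step2->lead": 0.50}

# Overall conversion is the product of the step rates, so its log change
# splits exactly into per-step log changes.
cr_a, cr_b = prod(rates_a.values()), prod(rates_b.values())
total_change = log(cr_b / cr_a)

# Share of the overall conversion change attributable to each step.
contributions = {
    step: log(rates_b[step] / rates_a[step]) / total_change for step in rates_a
}
```

In this toy example only the step1-to-step2 rate moved, so that transition carries 100% of the overall change, which is exactly the kind of insight the waterfall surfaces per segment.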
Evolution of user behavior
Finally, because the change between the two versions led to a major shift in user behavior, the client analyzed the impact of the changes on the user journey. DataMa Journey provides a view highlighting the changes in behavior between the two versions, using a sunburst visualization:
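The sunburst is a DataMa Journey visualization; the comparison underneath it can be sketched as a diff of path shares between the two versions (all journeys below are hypothetical):

```python
from collections import Counter

def path_shares(journeys):
    """Share of users following each distinct path."""
    counts, n = Counter(journeys), len(journeys)
    return {path: c / n for path, c in counts.items()}

# Hypothetical journeys observed on each version.
journeys_a = ["Home-Step1-Step2-Lead"] * 60 + ["Home-Step1-Exit"] * 40
journeys_b = ["Home-Step1-Step2-Lead"] * 70 + ["Home-Step2-Lead"] * 30

shares_a, shares_b = path_shares(journeys_a), path_shares(journeys_b)
all_paths = set(shares_a) | set(shares_b)
deltas = {p: shares_b.get(p, 0) - shares_a.get(p, 0) for p in all_paths}

# The path whose share moved the most between the two versions.
biggest_shift = max(deltas, key=lambda p: abs(deltas[p]))
```

Ranking paths by the absolute change in their share surfaces the behavior shifts (here, the drop-off path disappearing in version B) that the sunburst view makes visual.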
Using the DataMa platform and DataMa professional services, the client was able to quickly interpret the results of their A/B test and know precisely when, and which version, to roll out, thanks to in-depth analysis of where the funnel was working and what the next areas of improvement were.
The time to complete this complex analysis was a large improvement over the traditional “manual” approach, which usually requires data engineers, data scientists, and data analysts all working in the same room for multiple days. DataMa made it a one-day project once the data was collected.
To test DataMa solutions: https://app.datama.io/demo