- Metric: Conversion Rate
- Statistics: Z-Test
- Tails: 2
- Confidence: 95%
- Power: 80%
- SRM Confidence: 99%
Conversion Rate Calculator
The most complete metric is average revenue per visitor, but the simplest and most used one is definitely good ol’ conversion rate, which contributed to the rise of the fabulous field of conversion rate optimization a.k.a. CRO.
It’s simply the number of conversions divided by the number of unique visitors, and it’s a metric that is easy to track and run stats on. It is also better suited to sites with low traffic volumes: conversion data usually has lower variance than revenue data, so conversion rate tests tend to complete a lot sooner!
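The definition above can be sketched in a few lines of Python (the visitor and conversion counts are made-up example values):

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversion rate = conversions / unique visitors."""
    if visitors <= 0:
        raise ValueError("visitors must be > 0")
    return conversions / visitors

# e.g. 140 conversions out of 2,000 unique visitors
rate = conversion_rate(140, 2000)
print(f"{rate:.1%}")  # 7.0%
```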
This section allows you to compute both pre-tests MDEs (Minimum Detectable Effects) to prepare for your test, and post hoc analysis of test results.
To run an analysis:
- Pre-test (test planning): Just enter your weekly values for visitors and conversions for the control.
- Post-test (post hoc analysis): You need to include the values for all of your variants including control, for the duration of your test (and if the test has not yet completed, just put in the numbers up to the present moment).
How to calculate the conversion rate of a website?
Calculating the conversion rate of a website can be done at several levels. You can calculate the conversion rate of your entire online property, spanning all your pages, or you can zero in on a particular page or blog post and assess its conversion potential.
- To calculate the conversion rate of your entire website, set up a segment in Google Analytics that excludes traffic from subdomains and focuses only on your main domain. Then set up a conversion goal - either a destination or a custom event - for the most important action on your site. This can be a form submission, a paid transaction, or even a blog view. Once these two pieces are in place, simply divide the total goals triggered over a period of time by the total traffic to your domain.
- To calculate the conversion rate of an individual page, you can leverage the analytics of a platform like LanderApp or Unbounce. These can directly display the number of desired (pre-defined) actions divided by the page’s total traffic over a given period. If you are using a WordPress website, you can dig into Google Analytics instead.
Instead of simply taking the total number of events fired over a time period, look at the events fired on a particular page. You can configure Google Tag Manager to send this page-level data to your analytics account when setting up the tag.
What are some problems that marketers may face while calculating conversion rate metrics?
Conversion rate is a straightforward metric, and most marketers have no trouble calculating it.
But there can be technological constraints that hold them back:
- If the action that counts toward the conversion rate - say, a Free Trial sign-up on a SaaS website - comes from a call-to-action button that appears across the entire site (in the header or footer), calculating the conversion rate of individual pages can be a hassle unless the Google Tag Manager tag is configured to send the URL on which the goal fired.
- In most cases, privacy compliance laws call for cookie consent. This applies to a marketer’s entire analytics stack and means only partial data on goals and actions is captured. The measured conversion rate ends up lower than reality, especially when traffic counts are accurately logged while some goals are not.
- Traffic spikes due to bots and automated page views can skew conversion rates. The fix is to configure your analytics account to spot and filter out these unwanted guests.
What is statistical significance in A/B testing?
An A/B test reaches statistical significance when its p-value falls below our significance threshold. This signals that the observed data would be very unlikely if the null hypothesis were true, giving us good reason to believe the effect we are observing is not due to random chance.
Let us assume you’re running an A/B test on two landing pages selling the same product. The first one (we’ll call it A, or the control) does not have 3D product images. The second one (we’ll call it B, or the variation) has them.
The conversion rate in terms of “Add to Cart” is 7% for A and 9% for B. So should you just add 3D images to your product pages across the site?
No! Because not all visitors to your website have seen page B, and you can’t draw conclusions about their preferences simply from observing the behavior of a much smaller sample. Right? (PS: Don’t make assumptions in marketing or optimization… being data-driven is the way to go.)
How to solve this little problem?
The p-value comes to the rescue: it gives you the probability of seeing a 2-percentage-point increase in the “Add to Cart” KPI for your variation (page B), or a more extreme result, if the null hypothesis (that there is really no change) were true. If the p-value is less than the “risk” you are willing to accept - conventionally 5%, the significance level - then you can be reasonably confident in your result at your chosen confidence level (here, 100% - 5% significance level = 95% confidence level).
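A minimal sketch of the two-tailed two-proportion z-test behind this p-value, using the 7% vs. 9% example above (the 5,000-visitors-per-page sample sizes are hypothetical, added just to make the arithmetic concrete):

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-tailed two-proportion z-test with a pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-tailed p-value from the standard normal survival function
    p_value = erfc(abs(z) / sqrt(2))
    return z, p_value

# hypothetical example: 350/5000 (7%) on page A vs. 450/5000 (9%) on page B
z, p = two_proportion_z_test(350, 5000, 450, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
print("significant at 95%" if p < 0.05 else "not significant at 95%")
```

At these (assumed) sample sizes the 2-point lift is significant well below the 5% threshold; with far fewer visitors the same 7% vs. 9% gap could easily fail to reach significance.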
What is the confidence level of an A/B test? How does it differ from significance?
Both the confidence level and the significance level are two sides of the same coin!
Hence if we have a confidence level of 95%, it entails a significance level of 1 - 95% = 5% = 0.05. We can then use the significance level value to compare against our p-value. If our p-value < 0.05, we say that our A/B test has reached “statistical significance”, with a confidence level of 95%.
What is the power of an A/B test?
The power of an A/B test is a measure of how likely we are to detect an effect if it is really present. Usually, we want it to be at least 80%. The higher the better: it reduces the chance that we miss a real effect!
It plays an important role in sample size calculations because, along with the MDE (minimum detectable effect) and the confidence level, it determines the sample size you need to collect before analyzing a test.
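The standard two-proportion sample size formula can be sketched as follows (the 5% baseline rate and 10% relative MDE are illustrative assumptions, not values from the calculator above):

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_variant(p1: float, mde_rel: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant for a two-tailed two-proportion z-test.

    p1      : baseline conversion rate (e.g. 0.05 for 5%)
    mde_rel : relative minimum detectable effect (e.g. 0.10 for +10%)
    """
    p2 = p1 * (1 + mde_rel)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# e.g. 5% baseline, detect a 10% relative lift, 95% confidence, 80% power
print(sample_size_per_variant(0.05, 0.10))
```

Note how quickly the requirement grows as the MDE shrinks: halving the detectable lift roughly quadruples the sample you need per variant.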
What is a good conversion rate for a website?