You have questions about CRO. We have the answers. Scroll to learn all about Conversion Rate Optimization from one of the most reliable A/B testing tools on the market.
Conversion Rate Optimization (popularly known as CRO) is a discipline that uses data-driven frameworks of evaluation, analysis, and frequently A/B testing to remove obstacles from an online property — like a landing page, a form, or even an e-store — so that more of the visitors who come to these assets end up taking the desired action.
When a prospect takes the desired action, they are considered to be “converted”. Some examples of goals that can be improved through Conversion Optimization drives are:
Conversion rate optimization works by plugging the potential revenue/conversion leaks on a site to increase its profitability, as opposed to driving more visitors to a site that has unresolved resistance points.
While simply increasing ad spend and bringing more people to an unoptimized page may seem quicker and easier, Conversion Rate Optimization (or CRO) offers a higher return on investment over time because improving the site experience can lead to customers spending up to 140% more on a brand.
Moreover, there is an interesting correlation between optimizing a site for conversions and the revenue generated by it. Research by Forrester has found that some brands display an exponential relationship between improving CX and revenue. These companies should focus on offering exceptional customer experiences on an ongoing basis — even to customers who are already fairly satisfied.
When initial pain points and instances of poor site design are eliminated, the immediate impact may be modest. With time, however, the cumulative growth is significant: even small changes can lead to noticeable upticks in conversions, and thus profits.
If you are contemplating investing in Conversion Rate Optimization, the journey will be interesting, challenging, and rewarding.
Before we dive into the definition of conversion rate, let us refresh the concept of conversion. When, as a marketer, you build a landing page or write an email, you have a goal in mind: an intention that sets the tone for the copy, the design, and the CTAs.
When someone takes this desired action — they convert. And are logged in your analytics systems (Google Analytics/email analytics/landing page analytics) as a “conversion”.
The good news is you have the flexibility to define any desired action or end goal as a conversion.
Conversion rate is the ratio between the total number of unique visits with successful conversions and the total number of visitors to the page or the site (irrespective of whether they take conversion actions or not).
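Expressed as a quick calculation, here is a minimal sketch with made-up numbers:

```python
# Minimal sketch: computing a conversion rate from made-up numbers.
visitors = 18_000      # total visitors to the page in the period
conversions = 450      # unique visits that ended in the desired action

conversion_rate = conversions / visitors * 100
print(f"Conversion rate: {conversion_rate:.2f}%")  # -> Conversion rate: 2.50%
```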
Like most metrics, the conversion rate on its own can’t tell you much about the performance of your landing pages, your emails, or other assets. You need two additional data points to make complete sense of it:
If your landing page outperforms every other page in the space, yet you are still shelling out more money to convert prospects than you make from their conversions, you will not be profitable.
Thus, conversion rate together with baseline numbers and cost of acquisition can guide you toward changes you need to make and where you can get started for maximum impact.
Conversions also benefit from a back-up plan.
While the concept of conversion rate is simple in terms of math, there is a lot to unpack here.
Conversion rates can be further sliced and diced to focus on the conversion percentage for particular user segments.
For example, a recent E-commerce benchmark report by Smart Insights reveals how online stores perform for micro-segments like users who visit the sites from Google versus visitors who find the site through PPC (source-based segmentation). You can also explore performance by device and grouping of marketing channels — omni-channel vs. single-channel.
Conversions can further be classified as macro and micro conversions.
These are conversions for the primary goal of the site, the landing page, or the email. These conversions justify the effort invested in the optimization drive. For an e-store, this would be purchases completed. Or transactions made.
These are conversions for the supporting goal. So why do you need micro-conversions? They are a reporting back-up plan. And a way to further tune out the noise from performance data. Often, despite best efforts, there might not be a positive change in the macro conversion rate.
Under such circumstances, a positive shift in the micro-conversion rate (especially if the micro goals are supporting the macro-goal) can signal that the optimization efforts are at least in the right direction.
Conversion Rate Optimization can benefit greatly from a clear understanding of Leading Metrics and Lagging Metrics for the improvement you are trying to make.
Leading Metrics can be impacted through action or effort. Lagging metrics simply measure or evaluate the output of the Leading Metrics.
What’s Lagging for a particular optimization project can in fact be Leading for future experimentation drives.
If you are trying to improve how your site forms convert, drop-offs between screens are a Leading Metric. But the actual Abandonment Rate for the form is a Lagging Metric.
Google Analytics is the weapon of choice for most marketers. Here is a breakdown of how you can start tracking conversion rates for your website and landing pages.
A/B testing focuses on optimizing a site for conversion rates.
There is considerable emphasis on goals. But not all A/B testing tools are created equal. Some offer only bare-bones goal tracking in lower-tier plans, reserving the custom goal templates for higher-paying users.
But this defeats the purpose of conversion rate optimization… because, at the end of the day, it is about conversion rates.
Convert Experiences — for example — offers 9 goal templates. Plus 40+ conditions that can be stacked to define advanced goals. This rich variety allows for macro and micro conversion rate tracking, all from the experiment summary dashboard.
According to WordStream, the average conversion rate of a website is a low 2.5%. What you invest in various marketing drives to bring people to your online brand and presence fails to deliver returns for over 97% of the footfall.
When you look at the cost, the effort expended, and the output, it just makes sense to fix what is off with your site, your landing page, or your other assets, before scaling lead generation.
Conversion Rate Optimization is an exploratory process. No conversion strategy guarantees a win.
Like everything else in the rapidly changing, complex ecosystem that is your brand, CRO involves three key steps — Discovery, Experimentation, & Learning — for long-term and consistent improvement.
In short, you have an idea of what might be wrong and what might work instead, based on quality data. But the actual right solution can only “emerge” from your experiments.
The first stage is Discovery.
As the name suggests, this is the time to suspend judgment, ego, and what you think you know is right, and let the data point the way.
An analysis or a probe is only as good as the data underlying it. So make sure you collect clean, quality data and allow it to tell you the story.
There are 5 different types of data your discovery phase may yield, but for the sake of simplicity, they can be clubbed under the banners of Qualitative and Quantitative.
| QUANTITATIVE DATA IN CRO | QUALITATIVE DATA IN CRO |
| --- | --- |
| Start here. This is because quantitative data leaves little room for misinterpretation. If you are starting out with conversion optimization, having concrete stakes in the ground as “facts” is extremely important. | Once the quantitative data gathering is done, you can shift your focus to the qualitative realm. This is still better than going by gut instinct or the dreaded HiPPO. However, qualitative data asks you to interpret the findings. And there is a chance for bias to creep in. |
| Quantitative data is a precise, unequivocal metric that can be pulled from an analytics engine like Google Analytics, Mixpanel, or Amplitude. Some examples of quantitative data are pages viewed per session, the bounce rate of a page, or the total number of conversions. | Qualitative data is generalized. It is subjective. When you conduct focus group interviews, when you place surveys and polls on your site, when you look at user session recordings — you are collecting qualitative data. Hotjar is one of the most popular qualitative data collection tools out there. |
| Quantitative data tells you the “WHAT”. What aspect of your site or your landing page is broken? If the bounce rate is particularly high for a blog page but the scroll depth is decent — there is something off with the way that page is structured. | Qualitative data tells you the “WHY”. Why is the aspect signaled by quantitative data broken? Building on the example of the blog page, you may launch a poll on it to understand why people are bouncing away — despite finding the content engaging. |
| A blog should ideally invite readers to read more and explore the site. Visitors should not be bouncing away as soon as they get the “answer” they were seeking through the content. | The answer you get will not be definitive. Some visitors may indicate that they are students and have no intention of learning more about the brand. They are content with their “answer”. Others may point to poor site linking and the inability to find other interesting pages to peruse. |
| You can’t argue with quantitative data. If data collection is done well, these numbers are undisputed. | You can and should argue with qualitative data. This type of output encourages you to brainstorm, to consider possibilities that were completely off the table. |
Heuristics are mental shortcuts that human beings use to make snap decisions. A heuristic looks at past data and trends to identify a good-enough course of action to take.
If there were no heuristics, we would forever be stuck in trivial decision-making and squander our mental bandwidth.
So what exactly is a heuristic analysis in Conversion Rate Optimization and how can you leverage it in the Discovery phase?
Heuristic Analysis is taking the lay of your conversion land.
It is often used interchangeably with usability analysis, but the two are different, as is laid out here.
Typically in heuristic analysis, the CRO team identifies a list of 5 to 10 key “tasks” a visitor needs to complete on the website (or the landing page) and undertakes them.
Along the way, experts on the team compare the reality with UX best practices and score how detrimental the issues may be to the end goal of achieving a conversion.
This process is relatively quick and simple. It relies on the expertise and experience of the CRO team and can often point out glaring gaps right away.
Things that are broken and must be fixed.
You might say that Heuristic Analysis scores how your site stacks up against the wisdom of thought leaders and industry experts.
But that does not mean there can’t be a framework around it.
One of the best ones in recent times comes from CXL.com. It focuses on the following factors:
The bond that exists between the pre-click experience and the corresponding post-click landing experience. Conversion coupling means maintaining consistent messaging throughout your campaign.
Unbounce
Our Premium Agency Partner, Conversion Rate Experts, has compiled an in-depth guide on site elements that signal credibility. It will help you go beyond reviews & testimonials to explore the possibility of other trust indicators your competitors are missing out on.
The MECLABS Landing Page optimization formula is C = 4M + 3V + 2(I - F) - 2A.
where
C = Probability of Conversion
M = Motivation
V = Value Proposition
I = Incentive
F = Friction
A = Anxiety
A good, in-depth heuristic analysis would look at accepted best practices for all of the elements.
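To make the weighting concrete, here is a minimal, purely illustrative sketch: the scores are hypothetical 0-10 ratings an analyst might assign, and the output is a heuristic index, not a literal probability.

```python
# Illustrative only: the MECLABS formula is a heuristic, not a literal calculation.
# The scores passed in are hypothetical 0-10 ratings an analyst might assign to a page.
def meclabs_index(motivation, value_prop, incentive, friction, anxiety):
    """C = 4M + 3V + 2(I - F) - 2A; a higher index suggests a stronger page."""
    return 4 * motivation + 3 * value_prop + 2 * (incentive - friction) - 2 * anxiety

# A page with decent motivation and value, but noticeable friction and some anxiety.
print(meclabs_index(motivation=7, value_prop=6, incentive=4, friction=5, anxiety=3))  # 38
```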
Optimizing for after the conversion is equally valuable. It improves the UX of the entire buyer’s journey and ultimately increases the Lifetime Value of the customer.
OUTPUT OF HEURISTIC ANALYSIS:
● Red Flags
● Elements to Fix Right Away
We have been introduced to Quantitative data.
As a marketer, you are already well versed in the platforms that collect these data sets. The skill that you need to acquire is looking at the data from the point of view of an optimizer.
So how can you begin quantitative analysis for conversion rate optimization?
With time though, you should develop the habit of tracking key metrics throughout the year, and then it’s a question of crunching your collected data to allow patterns and improvement opportunities to emerge.
Google Analytics can offer both quantitative and qualitative data. But let’s focus on some great reports and hacks that can enrich your drive with quantitative data to optimize your website.
Content Marketing — in a world where millions, if not billions, of articles, infographics, and blogs exist, creating more assets to add to the noise hardly makes sense.
Unless you can tweak your process, learn from the misses, and reliably scale the hits. Conversion Rate Optimization is thankfully a principle that applies to your content as well.
CRO in content marketing focuses on a few things:
Conversion Optimization for Blogs: Understand the Value of Your Content Marketing
You can do this in a couple of ways:
Page Value is the average value of a page that users visited in a session before completing a conversion or an e-commerce transaction. Let’s unpack the implications first.
Filter the All Pages report by the URL of the particular blog or page you want to evaluate.
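Under the hood, Google Analytics derives Page Value roughly as (e-commerce revenue + total goal value) divided by the unique pageviews for the page. Here is a minimal sketch with assumed numbers:

```python
# Assumed numbers, purely to illustrate how Page Value is derived.
ecommerce_revenue = 1_200.00   # revenue from sessions that included this blog page
total_goal_value = 800.00      # value of goals completed in those sessions
unique_pageviews = 500         # unique pageviews of the blog page

page_value = (ecommerce_revenue + total_goal_value) / unique_pageviews
print(f"Page Value: ${page_value:.2f}")  # -> Page Value: $4.00
```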
Once you have a clear understanding of how blogs, or content in general, are performing, you can begin to segment articles by writer or topic to zero in on things that are working for your content strategy.
If a blog draws in lots of traffic but is not supporting conversions or has a high abandonment rate for the goal of interest, then you need to focus on finding ways to prove the value of the action you wish readers to take.
Maybe the CTA is not strong enough. Maybe a new heading is needed. And with that, you are already well on your way to optimizing content.
If the content is the net you cast to get on the radar of your prospects, funnels draw them in and qualify them to close.
A business funnel is a well-thought-out sequence of touchpoints that provides value to your ideal customers and motivates them to move from one stage of their buyer’s journey to the next.
A lot needs to align for a funnel to work.
Funnel optimization is a step-by-step process. Here are a handful of data sets from Google Analytics to guide you in the right direction.
The combination of goals, events, and custom segments can allow marketers to get good baseline quantitative data to begin form optimization.
Here is a step-by-step process to gather data to optimize the forms on your website:
The information you collect over the four steps is a patchwork and the gaps can be filled in through qualitative analysis.
Of course, if you are using dedicated form analytics tools like Zuko, Woopra, or ClickTale, you will be able to mine more precise insights.
According to the latest State of Web Analytics report by Hotjar, the biggest issue optimizers face when working with mostly quantitative data is understanding why customers behave the way they do on websites.
This is followed by the overwhelming number of data options and filters.
As already discussed, qualitative analysis mitigates these issues to a certain extent.
Qualitative data is different from quantitative data in two ways:
If quantitative data and the insights you extract from it point you in the direction of something broken on your website or in your funnels, qualitative data can reveal why it is broken and help you come up with a solution.
Love infographics?
Visuals convey vast volumes of data in an easy to grasp way. Heat maps leverage the same principle, allowing conversion-focused marketers to understand how users interact with the elements on a page (including the copy) by tracking the aggregate of their clicks, scrolls, and mouse pauses.
Tools like HotJar have become synonymous with heat maps. Heat maps are useful and they lower the barrier of entry into the complex and subjective world of qualitative data gathering and analysis.
A heat map works in the background, quietly collecting data and presenting it in a format that’s easy to interpret. This is one qualitative data channel that is almost as definitive as quantitative metrics.
Here is an example of a typical heat map:
Heat maps provide aggregated data.
They reveal patterns of engagement among your site visitors from the sample that is tracked.
USING HEAT MAP DATA:
The rule of thumb is to:
The main reason why people working with data often feel overwhelmed is the lack of context. You’ve got the numbers and you can turn them into complex reports and visualizations but what does it all mean and how do you make it actionable?
Silver Ringvee, Chief Technology Officer @ CXL Agency (via HotJar report)
User testing or user research is defined as tasking real people with defined (specific) actions or jobs they must complete on your website in a moderated (observed) or unmoderated (unobserved) setting.
Most of the high ticket, elite companies in the world invest in user testing in some form or the other.
In a typical user testing experiment, the following happens:
USING USER TESTING DATA:
What’s better than trying to guess why your site visitors won’t convert?
Asking them!
It sounds extremely simple and potent. On-site polls give marketers and optimizers an easy and relatively inexpensive way to pop important questions to their audience.
But the success of the exercise hinges on two key aspects:
To begin conversion rate optimization with on-site polls, you first need to have an idea of what is broken on your page or in your funnel. The answers you receive from your actual site users can then inform you of the “WHY” behind your assumption.
Remember that even with the “WHY”, you are still in the realm of speculation and should test any proposed changes before rolling them out across the site.
* Is our pricing easy to understand for you?
* Did the Features page answer your questions about what we do?
* What is keeping you from browsing more products?
* Do you think what we do (insert your service) is a good solution for your problem (insert specifics)?
* What is keeping you from buying a particular product?
* Would you recommend this Add to Cart experience to your friends?
* What is keeping you from completing the trial sign-up?
* Is it clear that the purchase comes with free shipping?
* What made you navigate to this page?
* What made you share the article on Facebook?
* What did you like the best about our virtual storefront display?
USING ON-SITE SURVEY DATA:
Chat queries and support tickets may not directly answer your most pressing conversion rate optimization queries, but they can do two things:
How to Optimize Your Site with Chat Data?
Everything that we have learned about Conversion Rate Optimization has been in preparation for this stage — Experimentation.
However, before we delve into this section, it is important to understand the distinction between three terms that are used almost interchangeably by marketers and testers. They are:
Customer Experience Experimentation consultant Nick So has a unique take on the differences.
By definition, an experiment is a procedure carried out to support, refute, or validate a hypothesis. From a business standpoint, experimentation is a methodology and mindset an organization can (and should) use to answer questions and guide decisions.
Nick So, via LinkedIn.
Experimentation is the WHAT.
A/B testing is simply a method of validating a hypothesis in an experiment. Because it is a Randomized Controlled Trial, it is often the most robust and objective method of validation for online businesses.
A/B testing is the HOW.
Conversion Rate Optimization (CRO) is the WHY. CRO utilizes various forms of research to develop ideas that aim to improve website conversions. In an experimentation-driven organization, these ideas are translated into hypotheses to be supported, refuted, or validated (the WHAT) typically using A/B testing (the HOW).
If you’re a fan of Simon Sinek, you already know that the WHY is the heart of any meaningful drive.
Following that logic, Conversion Rate Optimization is a desire to discover the why behind visitor actions on a site and then leverage the power of A/B testing with the rigor of an experimentation mindset to consistently improve the outcomes.
The word “rigor” is of the utmost importance when conducting experiments.
If you’re not careful, the true essence of experimentation is lost and it devolves into lip-service to data analytics and A/B testing.
Experimentation isn’t a one-time task.
It is a long-haul commitment. It requires deference to the insights derived from data and from pitching assumptions against the truth (revealed through testing). Once a brand commits to experimentation, it can no longer operate based on the HiPPO (Highest-Paid-Person’s-Opinion). The mindset of experimentation may start on site pages, but it certainly can’t end there. It seeps into all the nuts and bolts of a brand’s strategy, influencing even admin and Ops.
If you are starting out with experimentation & CRO, keep these experimentation pitfalls in mind:
Here’s the thing. If your site is pretty good, you’re not going to get massive lifts all the time. In fact, massive lifts are very rare. If your site is crap, it’s easy to run tests that get a 50% lift all the time. But even that will run out.
Peep Laja, CXL.com
Most winning tests are going to give small gains—1%, 5%, 8%. Sometimes, a 1% lift can mean millions in revenue. It all depends on the absolute numbers we’re dealing with. But the main point is this: you need to look at it from a 12-month perspective.
[WEBINAR] Understand the Culture of Experimentation
[WEBINAR] Why Tests Fail & Why That’s a Good Thing
While frameworks to prioritize A/B test ideas abound, actual experimentation frameworks that help you make sense of the data gleaned from your Research phase are relatively rare.
At least the ones that make sense to marketers and savvy CRO experts alike.
One framework that ties everything together in a logical, concise, and visual manner was developed by the CRO agency Conversion.
The model is pretty self-explanatory. In a snapshot, this is what it suggests:
Convert Experiences offers 40+ targeting filters to create granular audience segments:
A/B Testing tools like Convert Experiences offer the ability to include and exclude pages for experimentation and even trigger custom JS conditions on chosen site areas.
Plug this information into a hypothesis generator to create credible, simple hypotheses, which can then be tested for validity.
We reached out to our agency partners, Conversion Fanatics in the US and Browser to Buyer in the UK, to get a better sense of how this process differs across continents and markets.
Justin Christianson goes with a simple approach that eliminates unnecessary overwhelm. He focuses on:
Dave Gowans is methodical in his thinking. He recommends:
What is a hypothesis? Is it the same as a hypothesis in a scientific research program?
Before we investigate hypothesis and hypothesis generation, we need to understand that the changes we make to our websites, landing pages, and other online assets to improve user experience and ultimately conversion numbers can be clubbed under the umbrella of Online Controlled Experiments.
This serves to do two things:
In the world of Conversion Rate Optimization, a hypothesis is an educated and data-driven prediction that a specific change to an independent variable (like a site element or UX flow) will lead to a quantified impact on a dependent variable (represented by a metric you have already identified for measurement).
Sometimes a hypothesis is also defined as the cause-effect pair. The cause is the change to the independent variable, and the effect is the response of the dependent variable.
Here are some examples of independent variables in CRO:
Examples of dependent variables include:
Let’s see this in action.
You have a form with 7 form fields. 5 out of the 7 are required. The form submission conversion rate is 3%.
Your first indicator of the fact that something is off comes from a State of the Industry report you read where the average conversion rate for your product and industry on a micro commitment like a form submission is 5%.
This is a red flag that your form could possibly perform better.
You invest in heat map data and user research. The qualitative data indicates high form abandonment when users are presented with the field to indicate their annual income.
Candidates in the user research drive express skepticism around why sensitive personal data is needed to process the request for a free 1:1 demo call.
Based on the information, you can see that potentially reducing the number of form fields and pruning the request for personal data can lead to more conversions.
In this example, the form fields are the independent variable. They can’t change unless you change them. The form submission is the dependent variable. If you add the request for credit card details, chances are the submissions will plummet further. If you reduce the form fields to just one — Email Address — the number of people who go through the process of submitting the form will likely increase.
The metric you choose to quantify the behavior change or the response of the dependent variable is also important. It should capture the change in the desired behavior — form submission — directly, and without room for misinterpretation.
In this case, the most logical metric is the form submission rate. You can choose a couple of secondary or guardrail metrics as well.
We have also written an in-depth guide on framing complex hypotheses for online properties where the low-hanging fruits have already been picked.
The key parts of a hypothesis are as follows:
Here is an example of a real hypothesis that has driven change at Convert.
The numbers have been altered to protect our data.
We Have Observed that the free trial form converts at 40% By reviewing the free trial Google Analytics funnel. We Wish To remove the form field requesting a contact number For the Segment mobile Visitors. This Will Lead To a smoother submission experience on a hand-held device leading to a 10 percentage point increase in form submission Measured By the free trial Google Analytics funnel. And The Test Will Run For 3 weeks. We Will Ensure The Test Remains Ethical By not changing any claims made for the core offer of the free trial and only eliminating the phone number field since it does not alter the trial experience.
While the process of getting to a hypothesis in most cases is fairly standardized, there is flexibility around what a hypothesis can read like.
Matt Beischel of CorvusCRO recommends a streamlined Comprehension – Response – Outcome format, where Comprehension stands for understanding the problem, Response stands for making targeted changes to solve the problem, and the Outcome is self-explanatory.
After you’re done with a robust site audit, you will likely come up with dozens of ideas to improve outcomes for your business.
Good quality and reliable quantitative data help narrow down the options. If you have targets set in place for site performance, it is easy to go after metrics that are not pulling their weight to further investigate the problem areas.
But in general, prioritization is as important as analysis and experimentation. In fact, veteran optimizer Jonny Longden of Journey Further says:
“Intelligent prioritization is the key to increasing the pace at which conversion optimization delivers value…
But it isn’t enough to just prioritize according to what you think will work; this is just opinion layered on opinion.”
So how do you prevent the layering of opinions in the name of A/B testing & optimization?
Prioritization helps in two key stages.
The first is prioritizing research.
Don’t audit your entire site. And even if you do, don’t consider all site areas and workflows equal. If multiple user interactions feed into a critical metric that’s underperforming, choose to optimize the interactions that have a higher value.
While this won’t eliminate bias for good, it is better than the overwhelm of excessive data, and the inevitable meddling by HiPPOs to determine which data set drives optimization efforts.
When the research has concluded, prioritizing ideas with a framework proves useful. There are three popular frameworks leveraged by optimizers:
Here, P stands for the Potential of the idea that is about to be implemented. Potential is a vague term and stands for the anticipated improvement from the experiment.
I is for the Importance of traffic. What kind of traffic is being driven to the page or the location where the changes will be deployed? Echoing the sentiment of research prioritization, warm traffic that’s ready to convert deserves more optimization attention than cold traffic bouncing off an Awareness stage page.
E is Ease. How easy will it be to implement the change and push the experiment live? Will you need hundreds of thousands of visitors to reach statistical significance or is the sample size requirement similar to the traffic your site already receives?
In ICE,
I stands for Impact. What might be the impact of the experiment on the site?
C stands for Confidence. How confident is the optimizer that the test will reach significance and will indicate a better performing variant against the control?
E stands for Ease. Again, this is very similar to the PIE framework.
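As a quick illustration (hypothetical ideas and scores, using the common convention of averaging the three factors), both frameworks boil down to rating a backlog and sorting it:

```python
# Hypothetical backlog scored with the PIE framework (Potential, Importance, Ease).
# The same structure works for ICE; only the factor names change.
ideas = [
    {"idea": "Shorten the free-trial form",       "potential": 8, "importance": 9, "ease": 7},
    {"idea": "Rewrite the pricing page headline", "potential": 6, "importance": 7, "ease": 9},
    {"idea": "Rebuild checkout as a single page", "potential": 9, "importance": 8, "ease": 3},
]

for item in ideas:
    item["score"] = round((item["potential"] + item["importance"] + item["ease"]) / 3, 1)

# Highest-scoring ideas go into the testing queue first.
for item in sorted(ideas, key=lambda i: i["score"], reverse=True):
    print(f'{item["score"]:>4}  {item["idea"]}')
```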
These frameworks are handy. But they have their drawbacks:
To counter these issues, CXL has created the PXL framework which introduces a degree of objectivity to some of the vague terms — impact, potential, confidence.
It breaks down the gut feeling about a test into quantifiable factors that are on paper, and thus goes a long way in preventing bias or the blatant layering of opinions.
Here, optimizers take into account whether a test is:
To find the idea that goes through to testing.
Jakub Linowski of GoodUI interjects this narrative with a valid question.
Why do prioritization frameworks assume that all ideas are worth testing and thus have factors with only positive scores?
What about potentially detrimental ideas that can cause an actual dip in the existing performance of a page?
Think about it!
A scoring system that quantifies a “bad” idea can help optimizers come up with more contentious hypotheses, because the safety net of considerations will actively eliminate options that don’t make sense.
Often, the terms CRO & A/B testing are used interchangeably.
While A/B testing has garnered a lot of attention as the statistically sound way of identifying site changes that can influence user behavior driving more conversions, it is not the only spoke in the wheel of Conversion Rate Optimization.
Take a look at this section of the CXL 2020 State of Conversion Optimization Report. It ranks the effectiveness of various CRO components, as voted by optimizers.
It clearly shows that A/B testing is ONE of the ways in which conversions and outcomes can be improved.
Digital Analytics, Conversion Copywriting, Customer Surveys, User Testing… are the other candidates.
If you’ve been reading this guide for a while now, the question on your mind might be: Did we not use the methods of digital analysis, heat map tracking, user testing to uncover potential A/B testing opportunities?
Yes, we did.
And no, this does not mean A/B testing and the other components of Conversion Rate Optimization are mutually exclusive.
A/B testing puts science on your side. It is one of the most viable ways to establish a causal link between intended actions and results and translate data into actual revenue for your business.
Simply put, when you conduct an A/B test, you hypothesize that a particular change to an on-site element will lead to more of the user behavior that benefits your brand (in most cases, conversions).
And you put this educated assumption to the test by exposing a representative sample of your visitors to the control (the original version without any change) and another representative sample to the variant (the version with the hypothesis deployed).
The Role of A/B Testing Tools:
An A/B testing tool gives you a secure platform to put the variant live without hard coding these changes into your site.
It also buckets site traffic into seeing the control or one of the variants while respecting the proportion of the traffic that should participate in a test.
Plus, an A/B testing platform interprets the statistical aspect of the experiment for you, indicating a winner based on the confidence and power thresholds you set.
While it is highly recommended that you understand how your testing tool calls a winner, most of the heavy lifting can be delegated to a robust A/B testing tool.
If you are looking for a flicker-free, statistically transparent A/B testing platform, you can give Convert Experiences a free, 15-day spin.
When you specify a confidence level of 95% for an A/B test, you want to accept only a 5% chance (1 in 20) that any conversion rate improvement that may result from the experiment can be traced to randomness, and that there is no substantial difference between the control and the variant (null hypothesis) in terms of impacting desired conversion behaviors.
You can increase this confidence requirement to a high 99%. That would further reduce the chance of a false positive (the scenario where you reject the null hypothesis even though it is true) creeping into your test.
The downside of having a very high statistical significance is the fact that your test may take a longer time to be significant, as the test would need to collect more samples.
Significance thresholds help experimenters confidently consider data for analysis.
On a site that does not receive millions of visitors a month, a 99% statistically significant test could take weeks to conclude. And in the meantime, if the changes you’ve made to the variant are in fact negatively impacting conversion rates, then your revenue could suffer, as you look for improvement opportunities. That is why, when your site has a lot of traffic, it’s always good to allow only a certain percentage of your total traffic to enter an experiment.
Determine the sample size needed for your A/B tests. We’ve made a calculator for you.
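If you prefer to see the math behind such calculators, here is a minimal sketch of the standard two-proportion sample size estimate. The baseline rate and lift are assumptions, and the calculation uses the usual normal approximation:

```python
# Minimal sketch: per-variant sample size for a two-proportion A/B test,
# using the standard normal approximation. All inputs are assumptions.
from scipy.stats import norm

baseline = 0.03   # control conversion rate (3%)
mde = 0.006       # minimum detectable effect: +0.6 pp absolute (a 20% relative lift)
alpha = 0.05      # significance level (95% confidence)
power = 0.80      # statistical power

p2 = baseline + mde
z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

variance = baseline * (1 - baseline) + p2 * (1 - p2)
n_per_variant = ((z_alpha + z_beta) ** 2 * variance) / mde ** 2
print(round(n_per_variant))  # roughly 13,900 visitors per variant
# Tightening alpha to 0.01 (99% confidence) pushes the requirement past 20,000.
```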
Through A/B testing you can work with a sample of your site traffic to deploy changes that work for your entire traffic flow. As long as external factors stay the same and something drastic like a recession that can alter the buying patterns of people does not happen!
When you start running A/B tests, you will frequently come across the term “A/A testing”. It is creating two versions of the same page with your A/B testing solution and pitching them against one another.
Here, the null hypothesis is intentionally true. There is no significant difference between the pages and thus traffic in the samples should similarly react to them.
In most cases, this is true.
A/A testing should not spit out vast differences in conversion rates (or your chosen goal metric). The goal is to make sure the experiment is well set up and the platform one uses is working properly, that there are no issues at the randomization and bucketing level for instance. And A/A testing requires a larger sample size compared to A/B testing to eliminate variations caused by randomness.
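A quick simulation (with assumed traffic and conversion rate) shows what "working properly" should look like: when the null hypothesis is true by construction, roughly 1 in 20 A/A comparisons will still cross the 95% significance bar by chance.

```python
# Minimal sketch: simulate many A/A tests where both arms share the same true rate.
# Roughly 5% of them will look "significant" at alpha = 0.05 purely by chance.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(42)
true_rate, n_per_arm, alpha, runs = 0.05, 10_000, 0.05, 2_000

false_positives = 0
for _ in range(runs):
    conversions_a = rng.binomial(n_per_arm, true_rate)
    conversions_b = rng.binomial(n_per_arm, true_rate)
    _, p_value = proportions_ztest([conversions_a, conversions_b], [n_per_arm, n_per_arm])
    false_positives += p_value < alpha

print(f"Significant A/A results: {false_positives / runs:.1%}")  # close to 5%
```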
In any case, if you wish to go the A/A testing route:
Let us assume you’re running an A/B test on two landing pages selling the same product. The first one (we’ll call it A, or the control) does not have 3D product images. The second one (we’ll call it B, or the variation) has them.
The conversion rate in terms of “Add to Cart” is 7% for A and 9% for B. So should you just add 3D images to your product pages across the site?
No! Because not all the visitors to your website have seen page B, and you can’t make assumptions about their preferences simply from observing the behavior of a much smaller sample. Right? (PS: Don’t make assumptions in marketing or optimization… being data-driven is the way to go).
How to solve this little problem?
P-value comes to the rescue.
The p-value gives you the probability of seeing a difference at least as large as the 2 percentage point increase in the “Add to Cart” KPI for your variation (Page B) if luck or other random factors were the only things at play.
The smaller the p-value, the stronger the evidence that adding the 3D images meaningfully contributed to the uplift in conversions and that the effect would likely hold for all the visitors coming to your website. Ideally, you want your p-value to be below 0.05, which corresponds to a 5% significance level, in other words a 95% confidence level.
You rarely need to calculate the p-value for your tests. This is done in the back-end by your A/B testing engine.
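Still, it is easy to sanity-check the example above. With assumed visitor counts per page, a standard two-proportion z-test gives the p-value:

```python
# Minimal sketch: p-value for the 7% vs. 9% "Add to Cart" example.
# Visitor counts are assumptions; the calculation is a standard two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

visitors = [5_000, 5_000]     # sample sizes for the control (A) and the variation (B)
add_to_carts = [350, 450]     # 7% of 5,000 and 9% of 5,000

z_stat, p_value = proportions_ztest(add_to_carts, visitors)
print(f"p-value: {p_value:.4f}")  # well under 0.001, comfortably below the 0.05 threshold
```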
P-values do not operate alone.
They need to be held in check by something called a significance level or threshold. Think of it as the promise data scientists make to themselves to not fall in love with their hypotheses.
Say you are really impressed by those 3D images. To the point that even if the p-value tells you that there is a big chance that the increase in conversions means nothing, you still go ahead and roll them out.
Not a sound decision!
This is why A/B test results must be statistically significant. To rule out biases and make sure your budget is spent on the best bet!
The rule of thumb is to choose a threshold that your test’s p-value must stay under before you can claim that the variation genuinely outperformed the control.
This level is generally taken to be 5%. When you have a 5% risk tolerance, it means that if you randomly pick visitors from the people coming to your site, only in 5% of the cases (1 in 20) would you see a difference as large as the 2 percentage point increase in “Add to Carts” purely because of luck or noise.
If the observed lift clears that bar, we can conclude with reasonable confidence that the 3D images have enhanced the shopping experience in some way, leading to the improvement.
The power of an A/B test is an intuitive concept. It is easier to grasp than the trio of p-value, confidence, and statistical significance.
It is the ability of the test to detect an improvement in the goal metric when the control and the variant are actually different in terms of how they impact user behavior.
A/B testing tools recommend a statistical power of 80% for regular experiments and a power setting in the high 90s if the experiment involves rolling out a change that’ll consume a lot of resources to go live.
The more power you give your test, the more “sensitive” it is to detecting conversion lifts. This is why detecting a small Minimum Detectable Effect (MDE) at high power requires a larger sample size before the test can conclude.
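As a rough illustration of that trade-off (assumed baseline and lift, using statsmodels), solving the same test at 80% versus 95% power shows how the sample requirement grows:

```python
# Minimal sketch: how raising statistical power inflates the required sample size.
# The baseline rate and target rate are assumptions.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.036, 0.030)   # target rate of 3.6% vs. a 3.0% baseline (Cohen's h)

for power in (0.80, 0.95):
    n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=power)
    print(f"power {power:.0%}: ~{round(n):,} visitors per variant")
# Roughly 13,900 visitors per variant at 80% power vs. roughly 23,000 at 95% power.
```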
Despite best intentions, sometimes being 95% certain of your A/B test results is just not feasible in terms of the resources involved.
So should sites with a low volume of visitors just forego A/B testing?
The answer is no.
Low-traffic sites should focus on CRO with more vigor and bolder changes (changes that can cause bigger effects and will therefore reach significance faster). Likely, the resources they’ll need to quickly fix broken flows and get more revenue through the door are going to be less than the resources needed to increase traffic, either through costly PPC or through time-consuming SEO.
Smaller sites can always leverage other components of Conversion Rate Optimization (including heuristic analysis) for upticks and wins.
If they can reduce their risk aversion to go with lower confidence levels on changes that don’t cost a fortune to build out, they can run A/B tests to not only call winners but also gather data to inform future optimization efforts.
A “true” winning A/B test is elusive:
This is where concepts like CRO Maturity and the simpler Experimentation come in.
The premise is A/B tests are ways to learn more about the behavior of audience segments. The learning can be used to inform decisions, strategies, and changes, not just on a site, but across an organization and a business.
A/B testing is the foundation of an experiment-driven mindset. And as already stated, it is the most viable way marketers have to turn audience interactions with their branded touchpoints into more engagement, better site experiences, and ultimately more revenue.
The focus should be on learning first, immediate gratification second.
In the words of Matt Beischel from CorvusCRO, experimentation provides certainty in uncertain times.
Considering A/B testing in isolation will undermine its importance and turn a process that’s supposed to eliminate uncertainty into one that’s riddled with uncertainty.
Will the hypothesis prove to be true? Can the results be replicated outside the test environment? The list of what-ifs is unending.
A learning-first expectation from A/B testing — and not just a bid to improve conversion rates right away — can flip this needless anxiety.
Look at Netflix, Amazon, and LinkedIn…
The result of an A/B test is like an onion.
It can make you cry… in disappointment (if you’re in love with your hypothesis), or in joy (if you are eternally seeking truth and knowledge).
But most importantly, a winning variant is probably sitting on some shocking revelations. Simply put, a variant that wins may have lost in terms of performance for some audience segments.
This is where post-test segmentation comes into play.
It is an accepted best practice to look closer at the way a winner has behaved across all viable segments.
Most A/B testing tools like Convert Experiences allow testers to slice and dice the report based on factors like:
… so that no insight or learning is lost.
If you do find that a winning variant loses out to (say) the control for a segment like PPC traffic, you can either set the winner live for only the segments that truly found it useful, or you can start optimizing the control for PPC traffic in a new A/B test.
If the end goal is to learn and to improve user experience, not digging deeper is defeating the purpose of the exercise.
This may seem counterintuitive, but often a winner with a significantly higher conversion rate can actually make less money than the control or other losing variants.
This is a common occurrence in the eCommerce industry.
Drilling into the A/B test report to identify variant success by revenue is often a game-changer. If a particular experience ticks the box of a higher conversion rate (supposing that’s your primary goal) but loses out on the revenue requirement, then two conclusions can be drawn:
Convert Experiences reports offer the ability to look up variant winners based on revenue, plus easy Revenue Tracking.
How many tests should you run a month? In a year?
In its 2019 State of Optimization report, CXL found that most optimizers run no more than one test a week; two in five run just two tests per month.
While more testing is related to more learning, only you can determine the actual testing velocity for your program. This depends on several factors:
In any case, you have to start!
As you begin A/B testing, the frequency with which you win and the learning from the “losers” will help you evaluate the quality of your ideas, the integrity of your data collection processes, and the discipline with which you stick to segmenting results.
From here on, you can rinse and repeat to find a balance between test velocity and respecting the ground rules of rigorous experimentation.
It’s not enough to learn from an A/B test and keep it to the CRO team.
A/B testing needs buy-in — from everyone.
The best way to ensure that experimentation receives the funding it needs to move a company forward is to get all team members hooked to data. Data is addictive. It is so much easier to go off of a foundation that’s objective than to come up with ideas and arguments from scratch.
A test that’s properly conducted can blow a HiPPO to smithereens. That’s about the only kind of violence that’s needed in this brave new world we are navigating post-pandemic.
If you are running A/B tests, you need to be an adept story-teller.
Someone who can present the report in a way that goes beyond the dry run-down of statistics and focuses on the pain points of the customer, in this case, the rest of the organization grappling with opinion-laced hierarchies.
We have created a comprehensive guide on how you can present test results like a pro.
But in a snapshot, here is what you have to keep in mind:
You are a marketer.
You can make your stakeholders view Conversion Rate Optimization & A/B Testing as the key to business success.
Data privacy is a major concern for CRO and marketing.
When the EU passed the GDPR in 2018, data privacy became a multinational legal concern.
In 2020, the California Consumer Privacy Act (CCPA) looked to accomplish many of the same protections as GDPR. In Brazil, the LGPD (Lei Geral de Proteção de Dados) is to take effect in January 2021.
Unfortunately, there is no industry-wide guidance on the data privacy compliance for CRO and no case law to guide CRO specialists on the official interpretation of the laws.
The CCPA and GDPR have marketers shaking in their boots. But are they really the big boogeymen that we’re making them out to be? Of course not! These regulations are intended to safeguard consumer data. Since we’re all consumers, this benefits everyone.
The best thing you can do is to understand your customers’ data. How is your department or agency collecting this information? Is it stored securely and in alignment with privacy laws?
Keep a finger on the pulse of your data practices at all times by following the steps below.
It’s easy to overlook the finer details of your marketing efforts. And when it comes to privacy compliance, it can be even easier to miss crucial components that need to be kept up to standard.
By having a designated data privacy person, you’ll be less likely to overlook critical errors. You can even outline a process for them to follow, which could involve combing over your lead gen channels, website, chatbots, and ads for compliance with the GDPR, CCPA, and other regulations.
You likely have many team members in your company — from salespeople to marketing strategists to website managers. If everyone is well-versed in data privacy, you can prevent user privacy violations at every stage and in every department.
Your website should have a regularly updated Privacy Policy which outlines how your website collects data and what it is used for. This will look different for every company, but it’s best to be more transparent than not.
Publicly specify how your Privacy Policy is aligned with the GDPR, CCPA, LGPD, and other consumer privacy laws. Having a clear Privacy Policy ensures that users know that their information is being collected and can opt-out if they want to.
If you are a B2B company, there may be times when businesses ask for access to consumer data. For these times, you need to have a procedure in place that outlines what information you can provide and to whom.
You can do this by documenting internal workflows, noting how data is used and stored. You can create templates for your customer service or sales reps to follow. Finally, you can log customer service requests and audit your files regularly to make sure the information is secure.
When you generate leads for your business (say, through email marketing or chatbots) do you give users a way to opt out? If not, you may be coercing them to hand over their information.
No matter what lead gen channel you use, you must give users a way to opt-out, unsubscribe, clear their data, or get removed from your list. It may seem like a lead loss, but it could save you $$$ in penalties.
Much like your Privacy Policy, your client contracts can also be updated to reflect how your company collects and uses consumer data. Here, it may be best to consult with a contract lawyer on the issue. Again, the point is to let leads and clients know how their information is being used.
Minors between the ages of 13 and 16 must opt in when consenting to their information being collected and sold. For minors under the age of 13, you must obtain consent from their parents.
You must be transparent here. Make it uber clear how minors can opt in or out of having their information collected.
By doing so, you could avoid some costly penalties.
Can CRO still be done under these restrictions? Yes, absolutely. Many of the data privacy requirements align with best practices in CRO and marketing in general:
Thus, the GDPR, CCPA, LGPD, and any new privacy law aren’t going to destroy CRO. They’re simply giving a long-overdue facelift to the Wild West of internet data practices. As a leader, it’s your responsibility to market your brand while keeping your company compliant. Stay abreast of data regulations for smooth sailing.
Understand what the GDPR & E-Privacy mean for your favorite quantitative analysis tool — Google Analytics:
Impact of GDPR on Google Analytics
Impact of E-Privacy on Google Analytics
When they think of common CRO mistakes, most marketers probably picture a long list of things that negatively impact conversion rates across websites.
Too many form fields…
Slow loading sites…
Confusing menus and navigation…
If you are in search of more conversion follies, the folks over at BrainLabs have an excellent compilation.
Looking for practical experience conversion mistakes from an industry veteran? Dr. Karl Blanks has dropped a lot of wisdom about issues like pulling wrong levers in a bid to improve conversion rates, not understanding the intent of the traffic coming to the site, and more. You can explore them here.
What we would like to focus on here are the flawed mindsets and the myths surrounding Conversion Rate Optimization.
It’s more than the relentless bid to increase form submits and improve transactions. It’s a marathon, not a sprint, and it is a low-barrier entry point into the realm of true data-guided decision making. So here goes:
In reality, it is a lot more.
Thankfully, veterans like Jonny Longden from Journey Further are shedding light on the difference between hasty optimization drives with the sole focus of finding winners, and truly invested experimentation that collects learning from tests and looks for real opportunities with a significant impact on user experience, on revenue, and on the way a business operates.
This distinction is even more important when companies bring onboard consultants and agency partners to scale testing programs. Setting the right expectations company-wide for such an endeavor influences the perceived ROI.
An upside of looking at Conversion Rate Optimization with a wider lens is innovation. Sometimes you stumble upon data and insights that don’t contribute to optimizing a form or a landing page… but result in unconventional successes.
This myth needs to break.
A/B testing is a statistically sound way to validate the likelihood that any (positive) change in the conversion rate or overall performance of an asset — like a landing page or a form — stems from substantial improvements to the UX or to how prospects interact with them, and not dumb luck (or noise).
A/B testing should be accompanied by rigorous qualitative and quantitative data collection to craft a strong hypothesis that has been prioritized for potential.
A/B testing is just one spoke in the wheel of CRO. And it has to be supported by analytics, heat maps, user research surveys, click data, and more, to be effective.
CRO is a strange and unfamiliar discipline. It’s just better to pump money into growing traffic, right?
This assumption is never going to benefit a business. Conversion Rate Optimization is a practice that comes naturally to human beings. We like to learn and we like to improve. This duo is the evolutionary advantage that has given us an edge over every other living creature. Fundamentally speaking, CRO, too, is about constant improvement.
According to the CXL State of Optimization report from 2019, most optimizers struggle to secure funds for their experimentation programs. And even when they do, they are bogged down by:
To pull an organization out of the rut of thinking that CRO is at best an after-thought, marketers and testers must highlight the benefits of CRO beyond conversion lifts.
CRO is the basis of providing better experiences to prospects and is the most objective and data-fuelled way to be all about the customer.
You do need tools to gather insights, prioritize experiments, and A/B test ideas.
But a tool stack can only complement human genius. It can’t replace decades of professional acuity, knowledge, and the ability to modulate testing processes based on the risks involved in breaking rules — say, calling a winner early — given the context of the experiment.
Most businesses want the bells and whistles when it comes to choosing an A/B testing platform. But in reality, their teams end up using only a handful of key features that are frankly found in almost all the tools in the optimization space.
Conversion Rate Optimization calls for a plethora of skills — survey and heuristic knowledge, UX and design chops, copywriting abilities, the understanding to interpret data without bias, introductory statistics know-how, and dogged persuasion.
It is difficult and expensive to hire the right candidates. And upskill them to a point where they can make significant contributions to the optimization program.
It is in the best interest of the industry to reserve resources for a robust team over an exorbitantly priced tool.
There are two things to keep in mind here:
We can easily argue that CRO & SEO are meant to go hand-in-hand.
SEO is a part and parcel of Conversion Rate Optimization, especially the on-page and technical aspects geared to load pages faster, eliminate broken links and jarring experiences, and seamlessly serve user requests while minimizing the consumption of site resources.
There is no doubt though that things are looking up for the relatively young industry of Conversion Rate Optimization.
Brands like Amazon and Netflix have made strong cases in favor of CRO and A/B testing, and thanks to the positive (and realistic) buzz around their success, businesses in the know are steadily prioritizing optimization, year over year.
We still have a long way to go and the first step is education.
Conversion Rate Optimization can fix broken funnels and open them up to scaling. Something all businesses understand and want.
We have also compiled a list of the top 14 user testing tools that optimizers and experimenters use on the daily.
A: There is no accepted benchmark for a “good conversion rate”. The most you can find is an average from reports like the one by Unbounce.
The data is industry specific. It lets you understand how your assets are performing when compared to a broad average. However, once you are aware of this number, it is best to set your own conversion rate standards, based on your company goals and KPIs.
A: We asked the CRO experts in the Journey Further Book Club to share the books that they found most helpful when they were learning the basics.
Here are their consolidated recommendations:
You Should Test That by Chris Goward
Whilst Chris Goward’s book is quite old now, its core principle that changes must be tested to be valid is timeless.
Web Analytics 2.0 by Avinash Kaushik
Avinash became a leader in the analytics world because of his ability to explain how to gather, read, interpret, and understand data. All essential skills for a CRO.
Experimentation Works: The Surprising Power of Business Experiments by Stefan Thomke
Real-world examples of best-in-class CRO, featuring members of the Journey Further team.
A/B Testing by Dan Siroker and Pete Koomen
These guys went on to found Optimizely, but before they did that, Dan worked on the Obama election campaign and, with rudimentary tools, changed the world of experimentation forever.
Storytelling with Data by Cole Nussbaumer Knaflic
A beginner’s guide to learning how to visualize your data to tell a persuasive story on what needs to happen next.
The Journey Further Book Club is a community for time-pressured senior marketers and digital professionals, and you are welcome to join.
A: Opinions vary because the simple act of joining a group of data-driven marketers and optimizers means you get exposed to different points of view and learn from the mistakes and the efforts of others.
The following are some communities that Converters frequent and find value in.
The CXL Facebook Group: Moderated by CXL. Great place to hang out and keep a finger on the pulse of what optimizers are focusing on.
Conversion World: This is hardcore optimization & experimentation. Members mean business when it comes to CRO. There are online events, virtual round-tables, and e-conferences for the analytics experts in the know.
COVIDCRAP: A bold move to offer aid to struggling businesses in the aftermath of the pandemic, #COVIDCRAP is a community of optimizers who are looking to conduct consultation sessions with businesses that need CRO assistance. They have a corresponding Slack group that’s full of success stories and hard wins. This one keeps it real.
CRO.Cafe: This one isn’t exactly a community, but it is fast becoming almost a movement. Hosted by Guido Jansen, CRO.Cafe is a conversational take on everything Conversion Rate Optimization, remote work, and overall business success.
A: There are several great Conversion Rate Optimization degrees and courses out there, but we are going to stick with the one most Converters opt to take.
It’s the gold standard in the world of optimization.
The Conversion Optimization Mini Degree Program by CXL Institute.
If you are worried about the price point, you can also look into the scholarship program that’s in place at CXL.