Many people define a hypothesis as an “educated guess”.
To be more precise, a properly constructed hypothesis predicts a possible outcome of an experiment or test in which one variable (the independent variable) is deliberately changed and the impact is measured as the change in behavior of another variable (the dependent variable).
A hypothesis should be specific (it should clearly define what is being altered and what impact is expected), data-driven (the change to the independent variable should be grounded in historical data or previously validated theories), and testable (it should be possible to run the proposed test in a controlled environment to establish the relationship between the variables involved, and to disprove the hypothesis should it be untrue).
According to an analysis of over 28,000 tests run using the Convert Experiences platform, only 1 in 5 tests proves to be statistically significant.
While there is growing debate around sticking to the 95% statistical significance threshold, it remains a valid rule of thumb for optimizers who do not want to wade into the fray of peeking vs. no peeking and custom stopping rules for experiments.
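To make the 95% threshold concrete, here is a minimal sketch of a classical fixed-horizon two-proportion z-test, the kind of calculation the peeking debate revolves around. The visitor and conversion counts are hypothetical placeholders, not figures from the Convert Experiences analysis.

```python
# Minimal sketch: check an A/B test result against the 95% significance
# threshold with a two-proportion z-test. All numbers below are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-statistic and two-sided p-value for two conversion rates."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical example: control converts 500/10,000, variant converts 570/10,000.
z, p = two_proportion_z_test(500, 10_000, 570, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
print("Significant at 95%" if p < 0.05 else "Not significant at 95%")
```

Note that this is a fixed-horizon test: the sample size is decided up front and the result is read once, which is precisely why repeatedly "peeking" at it mid-experiment inflates the false-positive rate.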
There can be many reasons why a test does not reach statistical significance. But framing a tenable hypothesis that already looks feasible on paper is a better starting point than a hastily assembled assumption.
Moreover, the aim of an A/B test may be to extract a learning, but some learnings come with heavy costs: a 26% decrease in conversion rates, to be specific.
A robust hypothesis may not be the answer to all testing woes, but it does help prioritize possible solutions and leads testing teams to pick the low-hanging fruit first.
An A/B test should be treated with the same rigor as tests conducted in laboratories. That is an easy way to arrive at better hypotheses, more relevant experiments, and ultimately more profitable optimization programs.
The focus of an A/B test should be on first extracting a learning, and then monetizing it in the form of increased registration completions, better cart conversions and more revenue.
If that is true, then an A/B test hypothesis is not very different from a regular scientific hypothesis, with a couple of interesting points to note: