Ask An Expert, with Joris Bryon
August saw Convert team up with quirky optimization expert Joris Bryon for a giveaway of his book “Kill Your Conversion Killers with The Dexter Method™”.
If you are one of the lucky 10 people who got a free physical copy of the CRO must-read, congratulations!
But if you did not enter the giveaway, or did not win, here is a recap of the Q&A that happened live with Joris in Twitter-land.
Q1. How do you avoid harming organic SEO when doing conversion optimization?
A: We haven’t really had problems with it ourselves. Google accepts A/B testing and even encourages it, because it improves user experience. If you run a split URL test, though, add a canonical tag to the B-version pointing back to the original. That’s the main thing. You can read more about it here: https://support.google.com/webmasters/answer/7238431?hl=en.
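For example (hypothetical URLs): if the original page lives at https://example.com/product/ and the variation at https://example.com/product-b/, the B-version’s <head> should include <link rel="canonical" href="https://example.com/product/" />, so Google attributes the content to the original page.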
Q2. How do you make data-driven decisions when you don’t yet have enough traffic to generate meaningful data?
A: If you don’t have enough quantitative data, it’s better not to look at it at all, because you may interpret things the wrong way. For example: you have 99 visitors and 2 of them converted. That’s a conversion rate (CR) of almost 2%. Your 100th visitor converts as well. Now your CR is 3%. All of a sudden you have a roughly 50% increase in CR! Hurray! Of course, that kind of analysis is meaningless… It’s better to do a qualitative analysis in those cases, like an expert review or user testing. You don’t need traffic for that, and you can still get valuable insights from it.
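To make that concrete, here is a quick illustrative sketch in Python using the numbers from the answer, with a rough normal-approximation confidence interval added to show how wide the uncertainty really is at this sample size:

```python
import math

def cr_with_interval(conversions, visitors, z=1.96):
    """Conversion rate with a rough 95% normal-approximation interval."""
    p = conversions / visitors
    margin = z * math.sqrt(p * (1 - p) / visitors)
    return p, margin

before = cr_with_interval(2, 99)    # 2 conversions out of 99 visitors
after = cr_with_interval(3, 100)    # one more visitor converts

print(f"Before: {before[0]:.1%} ± {before[1]:.1%}")  # 2.0% ± 2.8%
print(f"After:  {after[0]:.1%} ± {after[1]:.1%}")    # 3.0% ± 3.3%
print(f"Apparent lift: {after[0] / before[0] - 1:.1%}")  # 48.5%
```

A single extra conversion produces an apparent ~50% lift, while the margin of error is larger than the conversion rate itself, which is exactly why this kind of analysis is meaningless at low traffic.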
Q3. Is it okay to decide on a winner because a variant is “clearly winning”, even though significance has not been reached?
A: No, because the test can still flip. I’ve seen it happen many times: what seems to be a clear winner at first can still end up being a loser. If you stop it too early, you’ll implement a loser and that’ll cost you money.
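For illustration, here is a minimal two-proportion z-test sketch in Python with hypothetical numbers (not from the Q&A); libraries like statsmodels offer the same test ready-made. A variant that is “clearly winning” by ~40% can still be nowhere near significance:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# B is "clearly winning" with a ~40% lift after 1,000 visitors per arm:
z, p = two_proportion_z_test(conv_a=20, n_a=1000, conv_b=28, n_b=1000)
print(f"z = {z:.2f}, p = {p:.2f}")  # z = 1.17, p = 0.24 — far from significant
```

With a p-value of 0.24, a result like this can easily flip before the test reaches significance.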
Q4. What do you see as the most common mistake people make that has a negative effect on website conversion?
A: Probably the biggest one is thinking that everyone will convert right away. It’s like asking someone to marry you when you’ve only just met. It takes time: you need to get to know each other, then maybe go on a few dates, and then that can evolve into a serious relationship. It’s the same with your site visitors: don’t expect them to buy from you right away.
Q5. How do you know which best practices work and which are conversion killers? Is it best to do trial and error?
A: Unfortunately there’s only one way to be sure: A/B testing. I wish there were a more straightforward way, but unfortunately there isn’t…
Q6. What minimum number of visitors would you recommend an eCommerce store have to start testing?
A: I don’t look at it in terms of visitors but in terms of conversions. We don’t recommend testing if your store has fewer than 1,000 transactions per month. Testing will just take too long if you have, say, 300 transactions per month; the rough estimate below shows why.
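A back-of-the-envelope duration estimate makes the point. This Python sketch uses a standard two-proportion sample-size formula with assumed numbers (2% baseline CR, a +20% target lift, 95% confidence, 80% power); none of these figures come from Joris:

```python
def test_duration_months(monthly_transactions, baseline_cr=0.02, rel_mde=0.20,
                         z_alpha=1.96, z_beta=0.84):
    """Rough months needed for a 2-variant A/B test (95% conf., 80% power).

    rel_mde is the relative lift you want to detect, e.g. 0.20 = +20%.
    Assumed planning numbers — real tests should use a proper calculator.
    """
    p1 = baseline_cr
    p2 = baseline_cr * (1 + rel_mde)  # the CR we want to be able to detect
    n_per_variant = ((z_alpha + z_beta) ** 2
                     * (p1 * (1 - p1) + p2 * (1 - p2))
                     / (p2 - p1) ** 2)
    monthly_visitors = monthly_transactions / baseline_cr
    return 2 * n_per_variant / monthly_visitors

print(f"1,000 tx/month: {test_duration_months(1000):.1f} months")  # ~0.8
print(f"  300 tx/month: {test_duration_months(300):.1f} months")   # ~2.8
```

Note that detecting a smaller lift (say +10%) roughly quadruples the required sample, which is why conversion volume, not raw traffic, is the binding constraint.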
Q7. Do you have any suggestions for success with lower traffic websites?
A: I highly recommend doing some thorough conversion research and then implementing anything you’re pretty sure of based on that research. It’s not as good as A/B testing, but it’s a hell of a lot better than doing nothing at all. 😀
Q8. What advice would you give to new founders/entrepreneurs that you wish someone told you when you were starting out?
A: It depends on what kind of business they’re planning to start. If it’s a CRO agency, I’d say don’t underestimate it – when you’re a CRO nerd like I am, you see the value of it, but a lot of businesses still think that throwing more traffic at their site will solve their problems. That’s frustrating to see, because you know you can make a difference, but a lot of businesses just aren’t at the point yet where they’re ready to invest in it.
Q9. Why is it important for an agency to have its own method? What methodologies from other agencies do you find appealing?
A: The most important thing is that you follow a process based on data. Whether you give that process a name (like our Dexter Method) or not is less important. A clear methodology, however, makes it easier for everyone else to understand what you’re doing and how, but at the end of the day what really matters is following a process based on data, and not just randomly changing and/or testing stuff.
Q10. What should I look for in an A/B Testing tool? Is it best for me to hire an agency rather than do it by myself?
A: I think speed is important when picking a tool, so you avoid the flicker (or “blink”) effect, where visitors briefly see the original page before the variation loads. And I’m biased of course, but if you can afford it, yes, you should hire an agency: their win rate is typically a lot higher than when you try to do it yourself.
Also, you can’t beat the kind of experience an agency has. We’ve set up more than 1,000 tests on online stores; it would take you ages to build that same experience in-house. And if you want to grow with CRO, you need to be consistent: you need to be testing a lot of things, all the time. If you try to do CRO on top of all the other things you’re already doing, you’re probably not going to set up enough tests to get value from it. I think agencies are the best option for stores with roughly less than $25M-$50M in revenue a year. Beyond that point, it makes sense to start building an in-house CRO team.
Q11. What challenges does ITP (Intelligent Tracking Prevention) present to optimizers, and how do you think they will manage them?
A: Under ITP, cookies set in the browser via JavaScript expire after 7 days. That’s a problem for most tests, because in many cases it makes sense to let a test run for at least 2 weeks. However, there’s a workaround: you can set cookies server-side. At least, that’s how we do it. Here’s more background: https://bit.ly/31PPSOI
RECOMMENDED RESOURCE: ITP is here, and Convert Experiences has a workaround. Read about it.
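For illustration, here is a minimal sketch of what setting the cookie server-side can look like, using Python/Flask with a hypothetical cookie name; this is not Convert’s actual implementation. Cookies delivered via an HTTP Set-Cookie response header are not subject to ITP’s 7-day cap on cookies written from JavaScript:

```python
import uuid

from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def index():
    # Reuse the visitor's existing id if the cookie is already set,
    # otherwise assign a fresh one. "ab_visitor_id" is a hypothetical name.
    visitor_id = request.cookies.get("ab_visitor_id") or str(uuid.uuid4())

    resp = make_response("Hello!")
    # Set the cookie in the HTTP response header (server-side), which
    # avoids ITP's 7-day cap on script-set cookies.
    resp.set_cookie(
        "ab_visitor_id",
        visitor_id,
        max_age=60 * 60 * 24 * 90,  # 90 days: plenty for multi-week tests
        secure=True,
        httponly=False,  # the testing script may still need to read it
        samesite="Lax",
    )
    return resp
```

The key point is simply that the Set-Cookie header comes from your server rather than from document.cookie in the browser.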
Q12. Is there a way to conclusively tell why a test lost or was inconclusive?
A: You’re never 100% sure, but if you test based on a clear hypothesis, you’ll come close enough.
A hypothesis should follow the structure “Changing [identified problem] into [proposed solution] will result in [desired outcome].” For example: “Changing hidden shipping costs (problem) into shipping costs shown upfront on the product page (solution) will result in a higher checkout completion rate (outcome).” If you see the [desired outcome], you know the [proposed solution] worked, and therefore that the [identified problem] was indeed a problem. And if a test loses (or is inconclusive), you can turn it around: it probably wasn’t a problem, or the solution wasn’t the right one.