Testing Mind Map Series: How to Think Like a CRO Pro (Part 81)
 
Interview with Richard Joe
There’s the experimentation everyone talks about. And then there’s how it actually happens.
We’re hunting for signals in the noise to bring you conversations with people who live in the data. The ones who obsess over test design and know how to get buy-in when the numbers aren’t pretty.
They’ve built systems that scale. Weathered the failed tests. Convinced the unconvincible stakeholders.
And now they’re here: opening up their playbooks and sharing the good stuff.
This week, we’re chatting with Richard Joe, CRO Manager at Cheil.
Richard, tell us about yourself. What inspired you to get into testing & optimization?
It was about 10 years ago, when I was working as a front-end web developer at a dev agency on ecommerce sites. A guy in my network had started a CRO agency and posted on his blog about how they were running A/B tests on their clients' websites. I didn't know what this stuff was about, but I found it fascinating that we could apply the scientific method to live audiences via experiments and see how real users interacted with the changes.
It was a few years later, when I was working as a generalist digital marketer, that I tinkered around with launching my first A/B tests, and from then on, I was hooked!
Aside from the CXL courses, what resources would you recommend to aspiring testers & optimizers?
I would highly recommend finding a good mentor. No course can teach you everything, and a good mentor with years of experience lets you leverage their knowledge without making the repeated mistakes and false assumptions that come from working in a vacuum.
Answer in 5 words or less: What is the discipline of optimization to you?
Challenge your biases and assumptions.
What are the top 3 things people MUST understand before they start optimizing?
They need to understand the business goals at an overarching level. For instance, if the business wants to bring in an additional $1 million in revenue per annum, that goal must be funneled into your experimentation goals. Doing this keeps your CRO program aligned with the business and signals to internal stakeholders why the program should exist.
Another important thing is having data you can trust. We typically validate experiments via an A/B testing tool, so it's important that we can trust that tool's data. This extends to our analytics platform: say we're using GA4, then we need to verify it's set up properly by doing an audit, and also that data is flowing correctly from our A/B testing tool into GA4, so that when it comes to post-test analysis, we can trust the numbers.
Another thing is to be data-led, not emotionally led, in how we approach experimentation. It's very easy to be sidetracked by stakeholders telling you to test this and that just because they read a case study in a blog. But every website is different and therefore has its own specific set of user problems. That means being led by both quantitative and qualitative data to build a holistic understanding of what issue a specific cohort is experiencing on your site and why, and then developing a testable hypothesis from it.
How do you treat qualitative & quantitative data to minimize bias?
We’ve all got our preconceived biases when it comes to testing, so I don’t believe you can remove them 100%, but we can minimize them by doing the following:
For quantitative data:
Ensure our testing tools are splitting traffic properly, with no sample ratio mismatch (SRM). This ensures we can trust the data in our testing tools. (See the sketch after this list for a simple SRM check.)
Also, get into the habit of predefining your success metrics for A/B tests. Think carefully about the primary and secondary metrics your test is expected to move.
When running a test, don’t pause it and call a winner just because you’ve reached significance. You want to avoid what stat nerds call ‘peeking’, whereby you declare a winner on an early, apparently significant result that would have reverted toward the null hypothesis had the test kept running. You’ll avoid this by getting into the discipline of doing pre-test analysis before you launch, so you know how long to run the test based on your selected minimum detectable effect (MDE). The sketch after this list shows one way to do that calculation.
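To make the last two points concrete, here is a minimal sketch in Python with SciPy of what an SRM check and a pre-test sample-size calculation can look like. The 50/50 split, the alarm threshold, and all the traffic and conversion figures are illustrative assumptions, not a prescription:

# A minimal sketch, assuming a 50/50 traffic split and a simple
# two-proportion test. All numbers below are hypothetical.
from scipy import stats

# --- 1. Sample ratio mismatch (SRM) check ---
# Observed visitor counts per arm; with a 50/50 split we expect equal counts.
control, variant = 50_210, 48_942
total = control + variant
chi2, p = stats.chisquare([control, variant], f_exp=[total / 2, total / 2])
if p < 0.001:  # a commonly used SRM alarm threshold
    print(f"Possible SRM (p={p:.5f}): don't trust this test's data")

# --- 2. Pre-test analysis: visitors needed per arm for a chosen MDE ---
baseline = 0.05           # current conversion rate (hypothetical)
mde = 0.10                # minimum detectable effect, relative (a 10% lift)
alpha, power = 0.05, 0.80
p1, p2 = baseline, baseline * (1 + mde)
z_alpha = stats.norm.ppf(1 - alpha / 2)
z_power = stats.norm.ppf(power)
n_per_arm = (z_alpha + z_power) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2
print(f"Run until ~{n_per_arm:,.0f} visitors per arm before reading results")

Divide that per-arm number by your expected daily traffic per arm and you get a run length to commit to up front, which is what removes the temptation to peek.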
For qualitative data:
If doing moderated user interviews, avoid leading questions, which tend to bias your participants. E.g., instead of asking “Did you find this checkout frustrating?”, ask “How would you describe your checkout experience?”
Look for trends and patterns. This means we take the comment of one angry user with a grain of salt, as it likely doesn’t represent the majority of users. This applies to all qualitative measures: user testing, surveys, reviews, and so on.
Lastly, ensure you integrate your quant and qual data. Use quant to explain the ‘what’ of the problem, e.g., your analytics platform might show 40% checkout abandonment. Then use qual to explain the ‘why’: in this example, session replays might reveal that the coupon field has a bug when users try to apply a coupon.
Where is optimization as a lever in the fields of growth & marketing headed? How is AI influencing the shifts?
Thanks to developments in AI, we’re going to move away from relying only on manual optimization, which involves heavy dev builds shipped to a live audience, toward launching experiments with AI agents against audiences that mimic real-life personas. This will enable ultra-fast feedback loops at a lower cost.
We’re already seeing the rapid adoption of platforms like ChatGPT erode users’ reliance on Google for finding answers, and this has given birth to Generative Engine Optimization (GEO): optimizing content for LLMs.
We’re also going to see AI allow companies to hyper-personalize to the masses at scale. Users will experience marketing in a very individual way (as opposed to as generic segments), which will allow companies not only to acquire customers more effectively but also to improve customer loyalty and experience.
Talk to us about some of the unique experiments you’ve run over the years.
At one of the places I was working, I noticed in GA that one of the site’s top-performing pages was an internet login page. This initially felt random, but looking at Google Search Console, I figured out that most users were landing on this page via organic search terms. Another relevant data point was that a large share of our user base was from an older demographic.
My hypothesis was that existing customers were googling for our login page rather than navigating directly to our website. I then decided we would cross-sell them another product using the principle of choice architecture.
Fortunately, this test won after a month of testing. I didn’t have all the data points I would have liked to back my hypothesis, but it goes to show that sometimes you can’t go wrong by testing!
What are the top ‘jaw-dropping’ revelations and controversial statements you’ve come across as the host of the Experiment Nation podcast?
There are a few!
One of the most jaw-dropping and controversial statements was when Ton Wesseling told me that “CRO will die”. It was used as a tagline to get people’s attention, but it was taken totally out of context and went viral in the CRO community, with some people arguing that we’d still have job security, etc. To set the record straight, he was simply saying that CRO will no longer sit with a specialist; instead, the process will be democratized across an organization.
Another controversial moment was when I interviewed Timothy Chan, who worked at Facebook as a data scientist. He explained that they ran experiments at such speed and scale that they didn’t mind running multiple experiments on the same page at the same time! I found this hard to believe, because I had always been taught, and it seemed logical, that it’s best either to wait for an experiment to finish before running another or to set them up as mutually exclusive experiments. He explained that the interaction effects weren’t that dramatic, which opened me up to the idea that you can sacrifice some data purity for speed and scale.
Another controversial position came from Chris Mercer of Measurement Marketing, who commented that when he makes changes to a website, he prefers to push them live on the actual site and measure a few days later to see what the impact was! This went totally against my framework of launching an A/B test for a period of time, gathering samples from a control and a variant, and performing statistical analysis on the results. I think he was coming from a very pragmatic stance. Since then, I have been in a similar position of making direct changes to specific areas of a site, partly for technical reasons but also pragmatic ones. I would caveat, however, that if you’re going to do this, have a backup plan: measure conversion rate daily for a period of time, and keep an archived version of your page that your dev can quickly revert to if you find your change has hurt results. A rough sketch of that kind of daily check follows below.
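To illustrate that caveat, here is a hypothetical sketch of the daily check, comparing each post-change day against a pre-change baseline with a plain two-proportion z-test. This is an illustration of the idea, not Mercer's method, and all the numbers are made up:

from math import sqrt
from scipy.stats import norm

def looks_significantly_worse(base_conv, base_visits, day_conv, day_visits, alpha=0.05):
    """One-sided two-proportion z-test: is today's rate significantly below baseline?"""
    p_base = base_conv / base_visits          # pre-change baseline conversion rate
    p_day = day_conv / day_visits             # today's conversion rate
    pooled = (base_conv + day_conv) / (base_visits + day_visits)
    se = sqrt(pooled * (1 - pooled) * (1 / base_visits + 1 / day_visits))
    z = (p_day - p_base) / se
    return norm.cdf(z) < alpha                # flag only significant drops

# Hypothetical numbers: a 30-day pre-change baseline vs. one post-change day.
if looks_significantly_worse(base_conv=1_500, base_visits=30_000, day_conv=32, day_visits=1_000):
    print("Conversion rate looks significantly down: revert to the archived page")

A single bad day can be noise, so in practice you would want to see the flag persist for a few days before reverting.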
Cheers for reading! If you’ve caught the CRO bug… you’re in good company here. Be sure to check back often; we have fresh interviews dropping twice a month.
And if you’re in the mood for a binge read, have a gander at our earlier interviews with Gursimran Gujral, Haley Carpenter, Rishi Rawat, Sina Fak, Eden Bidani, Jakub Linowski, Shiva Manjunath, Deborah O’Malley, Andra Baragan, Rich Page, Ruben de Boer, Abi Hough, Alex Birkett, John Ostrowski, Ryan Levander, Ryan Thomas, Bhavik Patel, Siobhan Solberg, Tim Mehta, Rommil Santiago, Steph Le Prevost, Nils Koppelmann, Danielle Schwolow, Kevin Szpak, Marianne Stjernvall, Christoph Böcker, Max Bradley, Samuel Hess, Riccardo Vandra, Lukas Petrauskas, Gabriela Florea, Sean Clanchy, Ryan Webb, Tracy Laranjo, Lucia van den Brink, LeAnn Reyes, Lucrezia Platé, Daniel Jones, May Chin, Kyle Hearnshaw, Gerda Vogt-Thomas, Melanie Kyrklund, Sahil Patel, Lucas Vos, David Sanchez del Real, Oliver Kenyon, David Stepien, Maria Luiza de Lange, Callum Dreniw, Shirley Lee, Rúben Marinheiro, Lorik Mullaademi, Sergio Simarro Villalba, Georgiana Hunter-Cozens, Asmir Muminovic, Edd Saunders, Marc Uitterhoeve, Zander Aycock, Eduardo Marconi Pinheiro Lima, Linda Bustos, Marouscha Dorenbos, Cristina Molina, Tim Donets, Jarrah Hemmant, Cristina Giorgetti, Tom van den Berg, Tyler Hudson, Oliver West, Brian Poe, Carlos Trujillo, Eddie Aguilar, Matt Tilling, Jake Sapirstein, Nils Stotz, Hannah Davis, Jon Crowder, Mike Fawcett, Greg Wendel, Sadie Neve, and Cristina McGuire.
Written By
Richard Joe

Edited By
Carmen Apostu