Testing Mind Map Series: How to Think Like a CRO Pro (Part 76)

By Jon Crowder · June 19, 2025

Interview with Jon Crowder

There’s the experimentation everyone talks about. And then there’s how it actually happens.

We’re hunting for signals in the noise to bring you conversations with people who live in the data. The ones who obsess over test design and know how to get buy-in when the numbers aren’t pretty.

They’ve built systems that scale. Weathered the failed tests. Convinced the unconvincible stakeholders.

And now they’re here: opening up their playbooks and sharing the good stuff.

This week, we’re chatting with Jon Crowder, Digital Experience Director at Journey Further.

Jon, tell us about yourself. What inspired you to get into testing & optimization?

I’ve been doing this for 15 years. In all that time, there’s still no consistent academic route into experimentation or CRO, so most practitioners start out in common, complementary disciplines. They have backgrounds in analysis, user experience design or research, and sometimes front-end development. I think the best experimenters spread themselves across several of these categories, but they often have one thing in common: they’re frustrated. Their single discipline keeps them from making meaningful change. The analyst can see what is holding a business back, but cannot propose and deliver solutions. The designer can design for delight, but is expected to deliver a highly polished, perfect design first time, and then that design comes unstuck when it meets real users. The dev is told what to code and why, and must decide on the most elegant way to construct it, but by that point, whether it will be useful, smooth or effective is already out of their hands.

A role in experimentation crosses all those borders. You have to have a deep and empathetic understanding of your user. You have to deeply understand their wants and needs and how that relates to your product. Once you understand this, you get to design solutions and interventions and then run REAL users through them to see if you had the desired effect. The methodology works it all out for you and allows you to be risk-averse whilst remaining adventurous, without having to gamble huge amounts of capital on outcomes.

That’s why I got into this. I started as a Test & Learn analyst for Sky UK in 2011, helping to bring web chat to the business and to define how it should work and what it should be in order to meet its goals: cheaper to run operationally than a call centre, whilst maintaining consistently high NPS and customer satisfaction for every user. After showing a flair for that, I spent a few years in a digital compliance role, which gave me a thorough and effective grounding in regulation and data protection at the enterprise level. It’s something I’m still enormously grateful for, as I draw on it today when working out the ethical way to capture and use data while crafting experiences.

Then I moved into what might now be understood as a pure CRO role, although that term was relatively new at the time. Jonny Longden and I built the Sky UK experimentation programme from the ground up, and through that I gained a huge amount of experience in experimentation and personalisation as a discipline, as well as in the digital and cultural transformation required to make a company of that size data-driven and experimental. We had to build everything, from a consistent, universal and well-implemented analytics setup capable of understanding user behaviour to actual experimentation tools capable of delivering code in the front and back end.

I’ve found the process of being an experimenter to be tremendously freeing. I’ve long had a deep curiosity about human behaviour and why people behave the way they do, as well as how we can help shape that behaviour. Experimentation lets me use that curiosity and empathy to help the wonderful brands we work with to deliver smarter, better-performing and more connected experiences online that really work for their business and their customers.

What’s the one resource you recommend to aspiring testers & optimizers?

Get really good at asking questions. There are plenty of great courses out there on experimentation, and many resources from tool vendors on how to use their tool to do what you need to do, but I’m convinced that we have to get very good at asking questions, and start from the scientific method, to help us understand how users engage with digital journeys.

Once you know the questions you want to ask, you can then define how you get the info and build on that. If your question is “How do users interact with this website?” then you can go and pull out info from analytics or experience tools, you can survey or interview users, you can observe users in their habitat. But it ALL starts from learning to ask questions, making no unreasonable assumptions, and having a voracious curiosity to find out why things happen.

Answer in 5 words or less: What is the discipline of optimization to you?

Human obsession, realised through design

How (to you) is experimentation different from CRO?

CRO as a term can be meaningful, but it’s fundamentally flawed by the sheer breadth of what it can cover. Some agencies or practitioners will try to sell you “CRO”, and what’s actually being sold is a playbook of unqualified recommendations attached to some promises around performance. This does our discipline a disservice, in my opinion. Truly effective experimentation isn’t derived from first principles and can’t be sold as a one-size-fits-all playbook that works for everyone. If it were, you wouldn’t need me or my team. You could just buy the best-rated Shopify template and be done with it.

True experimentation is inseparable from the user, because it’s the needs of that ever-shifting and complex group that we’re trying to meet and influence. If your experimentation function isn’t endlessly interrogating the user in an effort to better understand them, run a mile.

Over and above this, there are other crucial differences. CRO often results in the ‘tweaking’ of UI elements, content or other artifacts to improve a conversion rate. This might be suitable for some websites or apps, but it’s a smaller piece of the overall puzzle. The true benefit is realised by bringing entirely new journeys, features and products to life through data-driven, mature and experienced research and experimentation. This limits risk and exposure whilst also ensuring you innovate. Do this for a year or two and you’ll see your competitors start to copy you, lagging behind you as you drive true innovation in your digital experiences. The term CRO doesn’t cover this level of strategic thinking around experimentation.

What are the top 3 things people MUST understand before they start optimizing?

Experimentation must be a journey that the whole business goes on. If you’re introducing a new methodology, regardless of your results, you will face an uphill challenge in changing hearts and minds. Building journeys based on best guesses, past experience, competitor research and general sentiment is how it’s always been done, and most businesses would gladly continue like that, even if the new methodology is more effective. Experimentation works, but it’s also a far more transparent process: if something doesn’t have the desired outcome, there’s nowhere to hide.

Therefore, you’ve got to consider the following:

  1. How do we bring everyone on the journey, in a way that everyone wins? Is the business set up with OKRs and a culture that allows for this? If not… How do we get there?
  2. If you’re an agency, you need to consider how you’re going to make your stakeholder look good. This isn’t as easy as just winning some experiments, though that sure helps! You also need to arm your stakeholder with the right comms and resources to internally PR the journey you’ve been on, and how it has delivered better outcomes for you and your users.
  3. The journey is not linear, and you’re going to have small bets that win big and big bets that you were sure of that do not perform. This is all part of the game and you should learn to love all of these outcomes because they’re all valuable.

How do you treat qualitative & quantitative data to minimize bias?

There are a number of fantastic things you can do. Firstly, define your experiment bounds before you run: know what you’re measuring and how you’ll measure it. Document it publicly and clearly, and once the experiment has concluded, that’s the brief you weigh your results against.
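
For illustration, here’s one minimal way such a brief could be written down before launch. This is a sketch, not a template Jon describes; all field names, metrics and thresholds are hypothetical.

```python
# A minimal sketch of a pre-registered experiment brief, documented before launch.
# All names, metrics and thresholds here are illustrative.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ExperimentBrief:
    name: str
    hypothesis: str
    primary_metric: str
    minimum_detectable_effect: float        # relative uplift we care about
    significance_level: float = 0.05
    guardrail_metrics: list = field(default_factory=list)

brief = ExperimentBrief(
    name="checkout-reassurance-v1",
    hypothesis="Adding delivery reassurance at checkout increases completed orders.",
    primary_metric="checkout_completion_rate",
    minimum_detectable_effect=0.03,
    guardrail_metrics=["average_order_value", "refund_rate"],
)
print(brief)  # shared openly; results are weighed against this brief, nothing else
```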

More advanced methodologies go further and double-blind their practitioners. This is often a little hardcore for marketing and ecommerce purposes, but if you’re really serious about eliminating bias, the person who interprets the results shouldn’t know which treatment they’re reporting on, and that should only become known after the results are published.
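
To illustrate the idea (this isn’t a description of any particular platform), variant labels can be swapped for neutral codes before anyone reads the numbers, with the key only revealed once the write-up is done:

```python
# A minimal sketch of blinding an analyst to which variant is which before
# interpretation; the figures and labels are illustrative only.
import random

results = {
    "control":   {"visitors": 10_000, "conversions": 410},
    "treatment": {"visitors": 10_000, "conversions": 455},
}

# Swap the real labels for neutral codes before anyone interprets the numbers.
codes = ["Variant A", "Variant B"]
random.shuffle(codes)
key = dict(zip(results, codes))  # kept sealed until the write-up is done
blinded = {key[name]: data for name, data in results.items()}

for label, data in sorted(blinded.items()):
    rate = data["conversions"] / data["visitors"]
    print(f"{label}: {rate:.2%} conversion rate")

# Only after the interpretation is documented is the key revealed:
# print(key)
```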

Other standard data-quality techniques apply too: adhere to decent sample sizes, and make sure you understand whether outliers are influencing your results, both with traditional methods and with tools that automatically surface segments showing unusually large deviations from the norm.
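
As a rough example of what two of those checks can look like in practice (assuming Python with statsmodels available; the baseline rate, target uplift and revenue figures are made up):

```python
# A rough sketch of two routine data-quality checks, assuming statsmodels is
# installed; baseline rate, target uplift and revenue figures are made up.
import numpy as np
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.04   # assumed current conversion rate
target_rate = 0.046    # the uplift we want to be able to detect

# Sample size per variant for 80% power at a 5% significance level
effect_size = proportion_effectsize(baseline_rate, target_rate)
n_per_variant = NormalIndPower().solve_power(effect_size, power=0.8, alpha=0.05)
print(f"Visitors needed per variant: {int(np.ceil(n_per_variant)):,}")

# A simple IQR rule to flag outliers that may be skewing a revenue metric
revenue = np.array([12.0, 15.5, 9.9, 14.2, 480.0, 11.8])  # toy per-order values
q1, q3 = np.percentile(revenue, [25, 75])
iqr = q3 - q1
outliers = revenue[(revenue < q1 - 1.5 * iqr) | (revenue > q3 + 1.5 * iqr)]
print("Potential outliers influencing the results:", outliers)
```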

When trying to remove bias from qual data, you can do specific things appropriate to the research task. Don’t lead your user into doing what you want. Keep instructions minimal because you want to observe how they problem solve, rather than how they follow instructions. Feel free to ask questions about why a user did something, but don’t suggest WHY they might have done something, etc.

Also, know the limits of what you are measuring. Survey data about why people behave a certain way will not give you an honest appraisal of why people behave the way they do. It’ll give you an honest appraisal of how they view their own actions and what they’re willing to vocalise about their actions, but you’re viewing that through a lens of that user’s self-perception, and that’s often packaged with biases.

Talk to us about the unique experiments you’ve run over the years.

At Sky, I helped bring entirely new journeys to our Sky Broadband offering which helped users to self-serve in a way they’d not been able to previously. The business was reluctant to open up ‘line checking’ as an online service people could perform themselves because it could be used maliciously, so historically they’d always wanted a Sky employee to perform the line test. This had the unfortunate effect of forcing users to phone every time they had a broadband issue, and rendering online advice impractical and frustrating. By identifying users at the account level, we could ensure they could only line-test themselves, and then follow up with a meaningful intervention (e.g. your line test shows there’s a fault, so we’ve booked an engineer to come out and look at it) whilst minimising risk.

This intervention saved millions of pounds a year in call centre costs, whilst also improving customer satisfaction scores and time-to-resolve broadband issues.

I started working with Krispy Kreme during the pandemic. Traditionally, KK is heavily focused on its brick-and-mortar offering, with its website mostly a collection of rich content driving traffic to the online store finder so customers could shop in person. During the pandemic, they were forced to close almost all brick-and-mortar stores for an extended period, so they rapidly developed an ecommerce proposition that would deliver boxes of doughnuts direct to consumers. We helped them drive a huge number of changes on that website, aimed at recreating the joyful in-store experience of ordering doughnuts whilst communicating the unknowns around packaging, freshness and what to expect.

The Donkey Sanctuary is a current client of ours. Their goal is to make every single donation work as hard as possible, whilst also attracting as much reliable funding as possible so they can continue their exceptional work of rescuing and caring for donkeys. This one has been a lot of fun because our most successful experiments have been about giving users a delightful experience centred around the donkeys and their lives: highlighting a donkey who has a similar birthday to you (or to the person you’re buying the gift for) so sponsoring that donkey feels relevant and contextual, or bringing out each donkey’s personality and which donkeys are close friends so you can adopt both together. Each of these changes came from a deep empathy for how our users feel in their journey, the motivations driving them, and how we can better meet those motivations.

Recently, we’ve been working with a major insurance brand on one of their sub-brands, which provides travel insurance for older people and people with long-term health conditions. This sector of the market has specific needs, but is also a demographic for whom accessibility and trust are the strongest possible motivators. We’ve had success with a number of experiments for them centred on transparency and clear messaging, and I know that at the other end of this process is a person who found travel insurance a bit easier, less stressful and more trustworthy because of the work we’ve done.

According to you, how is AI shaping experimentation? 

AI has the potential to help us with some of the more manual work of experimentation, as well as reducing time-to-decisioning in a number of processes. I still strongly believe it shouldn’t be writing content or designing with no oversight. The more we dig in, the more we find that users are looking for a genuine connection and trust online, and if a user knows something was produced by AI, that trust massively falls off.

Where we’ve successfully used it, however, is in a few of our processes. It’s very simple to pull together low-fidelity wireframes for specific designs, and in our build and QA processes we’ve used AI tools to help produce faster, more robust and more consistent work, and to ensure the code we’re using is annotated, effective and clean.

We’ve also had some success using LLM tools to summarise longer research outputs for storage in a database, which then powers the reporting and operating system we offer each of our clients as standard, as well as specific applications of machine learning and AI for turning unstructured data into structured data for analysis and insight.


Cheers for reading! If you’ve caught the CRO bug… you’re in good company here. Be sure to check back often; we have fresh interviews dropping twice a month.

And if you’re in the mood for a binge read, have a gander at our earlier interviews with Gursimran Gujral, Haley Carpenter, Rishi Rawat, Sina Fak, Eden Bidani, Jakub Linowski, Shiva Manjunath, Deborah O’Malley, Andra Baragan, Rich Page, Ruben de Boer, Abi Hough, Alex Birkett, John Ostrowski, Ryan Levander, Ryan Thomas, Bhavik Patel, Siobhan Solberg, Tim Mehta, Rommil Santiago, Steph Le Prevost, Nils Koppelmann, Danielle Schwolow, Kevin Szpak, Marianne Stjernvall, Christoph Böcker, Max Bradley, Samuel Hess, Riccardo Vandra, Lukas Petrauskas, Gabriela Florea, Sean Clanchy, Ryan Webb, Tracy Laranjo, Lucia van den Brink, LeAnn Reyes, Lucrezia Platé, Daniel Jones, May Chin, Kyle Hearnshaw, Gerda Vogt-Thomas, Melanie Kyrklund, Sahil Patel, Lucas Vos, David Sanchez del Real, Oliver Kenyon, David Stepien, Maria Luiza de Lange, Callum Dreniw, Shirley Lee, Rúben Marinheiro, Lorik Mullaademi, Sergio Simarro Villalba, Georgiana Hunter-Cozens, Asmir Muminovic, Edd Saunders, Marc Uitterhoeve, Zander Aycock, Eduardo Marconi Pinheiro Lima, Linda Bustos, Marouscha Dorenbos, Cristina Molina, Tim Donets, Jarrah Hemmant, Cristina Giorgetti, Tom van den Berg, Tyler Hudson, Oliver West, Brian Poe, Carlos Trujillo, Eddie Aguilar, Matt Tilling, Jake Sapirstein, Nils Stotz, and Hannah Davis.

Written By
Jon Crowder, Digital Experience Director at Journey Further

Edited By
Carmen Apostu, Content strategist and growth lead. 1M+ words edited and counting.