Testing Mind Map Series: How to Think Like a CRO Pro (Part 79)
Interview with Sadie Neve
There’s the experimentation everyone talks about. And then there’s how it actually happens.
We’re hunting for signals in the noise to bring you conversations with people who live in the data. The ones who obsess over test design and know how to get buy-in when the numbers aren’t pretty.
They’ve built systems that scale. Weathered the failed tests. Convinced the unconvincible stakeholders.
And now they’re here: opening up their playbooks and sharing the good stuff.
This week, we’re chatting with Sadie Neve, Lead Optimisation Manager at Tesco.
Sadie, tell us about yourself. What inspired you to get into testing & optimization?
Hi, I’m Sadie! Currently a Lead Optimisation Manager at Tesco, a British multinational groceries and general merchandise retailer.
My fascination with experimentation began nearly a decade ago, rooted in my academic background in Psychology. I’ve always loved diving into human behaviour, especially the scientific method behind understanding it. Observing patterns, forming hypotheses, running experiments and discovering genuine insights was something I thoroughly enjoyed!
When I finished university, I sat down and analysed what I enjoyed about my studies and it all pointed towards experimentation and evidence-based decision-making. That’s when I stumbled upon the world of CRO and A/B testing. I landed my first role in the industry soon after and haven’t looked back since!
How many years have you been testing for?
It will be 9 years, or 13 if you count my degrees!
What’s the one resource you recommend to aspiring testers & optimizers?
There’s no shortage of material out there, but if I had unlimited time and budget, I’d go to as many experimentation conferences as possible. They’ve become incredibly interactive, and when the speaker lineup reflects diverse thinking, they’re transformative learning spaces. Right now, CreativeCX, a London-based specialist experimentation agency, are hosting a fantastic meet-up series featuring brilliant speakers and real-world insights.
As for more convenient resources, I like to pull inspiration from unexpected places. One of my favourites is Think Like a Rocket Scientist by Ozan Varol. It reimagines aerospace thinking into bold strategies for innovation and experimentation. Think big, test boldly, and embrace failure, because ‘if you stick to the familiar, you won’t find the unexpected’.
Answer in 5 words or less: What is the discipline of optimization to you?
Scientific playground for curious minds!
What are the top 3 things people MUST understand before they start optimizing?
- Build on the foundations of the scientific method – even outside digital! Optimisation is essentially science in action. Understanding the full experimental process, from making observations to measuring statistical significance, gives you the power to design rigorous, meaningful tests. My background in psychology gave me this grounding, and it’s helped me adapt experimentation frameworks and processes for different businesses while preserving integrity and precision. Understanding the foundations means you don’t blindly follow industry standards; you know why they exist and how to apply them wisely!
- Every hypothesis deserves daylight! Success in experimentation isn’t about proving your own ideas right, it’s about unlocking success through data, creativity and diverse thinking. Sadly, ego has no seat at this testing table! The best outcomes often come from open collaboration and cross-functional teams, where anyone’s hypothesis can spark breakthroughs. If it’s data-informed and aligned with what success looks like, then it’s worth adding to your backlog!
- Expect failure and value it! This one can’t be said enough. I still see teams shy away from sharing ‘unsuccessful’ test results, but a losing test isn’t a failure, it’s a lesson. Every outcome teaches us something new. LinkedIn ran a study over 7 months that showed iterative experiments (those built on previous learnings) improved a key performance metric by 20% compared to static, one-off tests. When you embrace failure as part of the process, you create space for compounding insights and genuine success.
How do you treat qualitative & quantitative data to minimize bias?
Biases will naturally creep into our experiments as we are wired to seek patterns. Minimising bias starts with acknowledging it, both in how we collect data and interpret it.
Quantitative data is often seen as objective, but the design of the experiment, the metrics chosen, and the segmentation applied can all introduce bias. To reduce this, it’s crucial to clearly define what success looks like, maintain statistical integrity, and avoid having pre-conceived expectations of the outcomes.
Meanwhile, qualitative data is rich with context but open to interpretation. To minimise bias here, I focus on gathering insights from a diverse set of users and use structured frameworks to break down responses. Anecdotes are valuable, but I treat them as signals to investigate alongside other research!
Balancing both data types, and understanding that they complement each other rather than compete, allows us to run better, less biased experiments.
How (to you) is experimentation different from CRO?
Let’s settle this once and for all, experimentation ≠ CRO. And yes, it’s definitely a pet peeve of mine that the two still get used interchangeably.
CRO is a subset of experimentation, focused on improving a specific metric – conversion rate. Limiting our lens to that alone can be reductive. Experimentation is broader and far more powerful because it focuses on the customer and driving informed change across the entire digital and offline experience.
Where CRO is results-driven, experimentation is learning-driven. It’s less about chasing short-term uplifts and more about fuelling long-term strategy and decision-making, encouraging businesses to ask why something worked (or didn’t), so that every test becomes a foundation for growth.
I get annoyed about the term ‘A/B testing’ too. It’s often talked about as just one variant against a control, but real experimentation embraces multiple variants, understanding that it’s a tool for exploration, not just validation!
Talk to us about some of the unique experiments you’ve run over the years.
I’ve had the chance to run experiments across a range of industries – from gambling to charity, to retail and beauty. Each one has taught me something new and allowed me to test some really exciting changes and products.
One of the most memorable projects at Tesco was Unpacked – our version of Spotify Wrapped, tailored to customers’ shopping habits. It was built entirely with the customer in mind, designed to surprise, delight and inform them about their everyday habits! The engagement we saw was phenomenal, reinforcing how experimentation can be used to drive emotion as well as behaviour.
But one of my favourite experiments was an exploratory test for a major beauty booking platform. The business was considering adding photo functionality to customer reviews to increase credibility and trust. It was a significant development investment, so instead of guessing, we ran a fake door test to gauge interest. Customers saw an option to ‘add a photo’ to their review, but since the feature didn’t exist yet, we used that click-through rate to measure demand. We paired this with a short qualitative survey to uncover why people wanted to add photos and which categories they found most relevant. Turns out mani-pedis were a big yes, but hair removal... not so much!
The experiment perfectly illustrated the power of testing beyond metrics. It gave us demand data to support the product decision, customer context to shape the feature’s design, and category-level insights to prioritise the rollout. A great example of how experimentation is more than validation!
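For readers curious what a fake door readout can look like in practice, here’s a minimal, hypothetical sketch in Python. The category names and numbers are purely illustrative (they are not from the actual test); it simply compares ‘add a photo’ click-through rates between two review categories with a two-proportion z-test.

```python
# Hypothetical fake door readout: compare 'add a photo' click-through rates
# between two review categories. All numbers below are made up for illustration.
from math import sqrt, erf

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test for a difference between two click-through rates."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Illustrative data: sessions where the fake 'add a photo' option was shown
mani_pedi = {"clicks": 340, "views": 2100}    # hypothetical
hair_removal = {"clicks": 95, "views": 1900}  # hypothetical

ctr_a, ctr_b, z, p = two_proportion_z_test(
    mani_pedi["clicks"], mani_pedi["views"],
    hair_removal["clicks"], hair_removal["views"],
)
print(f"Mani-pedi CTR: {ctr_a:.1%}, hair removal CTR: {ctr_b:.1%}")
print(f"z = {z:.2f}, p = {p:.4f}")
```

A readout like this only measures demand; as Sadie notes, pairing it with a short survey is what turns a click-through rate into insight that can shape the feature’s design and rollout.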
Your take on AI in optimization & experimentation:
AI is dominating the conversation right now, from conferences to LinkedIn posts! But truthfully, I’m still not convinced it’s making a real impact in testing programmes just yet. Right now, AI often gets pitched as a shortcut, something to fast-track the 80/20 or reduce time spent on execution, and I believe it does play a role there. In my view, though, AI’s real potential in optimisation lies in synthesis: the ability to connect dots across disconnected data sources and reveal nuanced behavioural patterns, friction points, and opportunities that we humans might overlook.
The future I’m excited about isn’t AI running tests for us or choosing variant winners, it’s AI helping us ask better questions, spot richer hypotheses and scale the discovery process. With this mindset, hopefully, we can see how experimentation teams are empowered and not replaced!
Cheers for reading! If you’ve caught the CRO bug… you’re in good company here. Be sure to check back often; we have fresh interviews dropping twice a month.
And if you’re in the mood for a binge read, have a gander at our earlier interviews with Gursimran Gujral, Haley Carpenter, Rishi Rawat, Sina Fak, Eden Bidani, Jakub Linowski, Shiva Manjunath, Deborah O’Malley, Andra Baragan, Rich Page, Ruben de Boer, Abi Hough, Alex Birkett, John Ostrowski, Ryan Levander, Ryan Thomas, Bhavik Patel, Siobhan Solberg, Tim Mehta, Rommil Santiago, Steph Le Prevost, Nils Koppelmann, Danielle Schwolow, Kevin Szpak, Marianne Stjernvall, Christoph Böcker, Max Bradley, Samuel Hess, Riccardo Vandra, Lukas Petrauskas, Gabriela Florea, Sean Clanchy, Ryan Webb, Tracy Laranjo, Lucia van den Brink, LeAnn Reyes, Lucrezia Platé, Daniel Jones, May Chin, Kyle Hearnshaw, Gerda Vogt-Thomas, Melanie Kyrklund, Sahil Patel, Lucas Vos, David Sanchez del Real, Oliver Kenyon, David Stepien, Maria Luiza de Lange, Callum Dreniw, Shirley Lee, Rúben Marinheiro, Lorik Mullaademi, Sergio Simarro Villalba, Georgiana Hunter-Cozens, Asmir Muminovic, Edd Saunders, Marc Uitterhoeve, Zander Aycock, Eduardo Marconi Pinheiro Lima, Linda Bustos, Marouscha Dorenbos, Cristina Molina, Tim Donets, Jarrah Hemmant, Cristina Giorgetti, Tom van den Berg, Tyler Hudson, Oliver West, Brian Poe, Carlos Trujillo, Eddie Aguilar, Matt Tilling, Jake Sapirstein, Nils Stotz, Hannah Davis, Jon Crowder, Mike Fawcett, and Greg Wendel.
Written By
Sadie Neve
Edited By
Carmen Apostu




