Testing Mind Map Series: How to Think Like a CRO Pro (Part 80)

By Cristina McGuire · September 22, 2025

Interview with Cristina McGuire

There’s the experimentation everyone talks about. And then there’s how it actually happens.

We’re hunting for signals in the noise to bring you conversations with people who live in the data. The ones who obsess over test design and know how to get buy-in when the numbers aren’t pretty.

They’ve built systems that scale. Weathered the failed tests. Convinced the unconvincible stakeholders.

And now they’re here: opening up their playbooks and sharing the good stuff.

This week, we’re chatting with Cristina McGuire, Lead Data Scientist – Experimentation at Chewy.

Cristina, tell us about yourself. What inspired you to get into testing & optimization?

Hi, I’m Cristina! I’m a statistician by training and started my career in analytics consulting back when the “data scientist” title was just emerging. My first DS role was in a software development team, where I built insights tools for grocery retailers. The work was broad in scope, exciting, and gave me significant growth opportunities — but after a few years I realized I missed having statistics at the core of my day-to-day work.

That changed when I joined the Experimentation Team at Expedia Group — and honestly, it felt like the exact role I had been craving. I discovered there was a place where I could solve problems with statistical methodology and scale those solutions across the business. I also loved working closely with the experimentation platform and analytics community, evangelizing the experimentation playbook, and spreading best practices across the company. That’s where I truly developed my passion for testing and optimization.

Now at Chewy, I’m continuing that journey at an earlier stage of maturity — shaping the foundations of experimentation and supporting teams as they build confidence in testing with the right methodology and guardrails. What excites me most about experimentation is that it’s both scientific and practical — anyone can be a scientist! By applying rigorous methods, you can generate insights that directly improve customer experiences and business outcomes through informed decision-making.

What are the top 3 things people MUST understand before they start experimenting?

If you’ve heard about experimentation and want to try it but have never done it before, here are three key things to understand:

  1. The benefits of experimentation. A/B testing isn’t about slowing down launches — it’s about making evidence-based decisions that can be repeated and trusted so you can move faster with confidence. Understanding the value of experimentation, and how it can help in your role, makes adoption far more meaningful.
  2. Experimentation is a team sport. You don’t need to be an expert in statistics or engineering to get started, but having the right tools and systems in place is critical. Strong collaboration between product, engineering, design, and data science makes experiments not only more reliable, but also more impactful.
  3. Decide how you’ll use the results. Not every experiment has the same purpose. Sometimes the goal is to make a launch decision, other times it’s to measure incremental financial impact, or simply to identify the best option among many. Each scenario requires a different level of rigor, so it’s important to clarify the decision framework upfront.

But above all, the most important thing is to get started, and then build best practices until experimentation becomes a natural part of the product development process.

CRO Expert Profile Cristina McGuire

How and where do data science & experimentation overlap?

Data science and experimentation are deeply connected. One of the most overused phrases in data science is “garbage in, garbage out” — but it’s true. Without good data and sound methodology, experiments can’t scale effectively because teams lose trust in the insights and fail to see the benefits. A strong data science foundation reduces analysis time, speeds up launches, and ensures results are both reliable and actionable.

From my experience, the level of data science involvement can vary widely. In some organizations, data scientists are hands-on in every stage of experimentation — from test setup and design to monitoring and analysis. In others, especially where the experimentation platform is mature and highly automated, DS involvement can be lighter, with scientists focusing more on generating new hypotheses, shaping product ideas with deeper insights, and conducting more thorough research beyond standard A/B testing.

If someone aspires to be a confident tester who understands the fundamentals of stats, where should they begin?

This is a great question. If you have access to an experimentation or data science team, that’s the best place to start — reach out to them and see if they have training materials or office hours. Learning from people who already run experiments in your org helps you connect the theory to practice. Every company has its own flavor — for example, Bayesian vs. frequentist methods, or A/B tests vs. multi-armed bandits vs. quasi-experiments.

Outside of work, there are plenty of accessible resources. I follow experimentation experts on LinkedIn and subscribe to newsletters to stay up to date on new topics. For specific questions, I often look at Medium blogs or research papers. When I was starting out, one paper I referred to a lot was Data-Driven Metric Development for Online Controlled Experiments: Seven Lessons Learned, and Ronny Kohavi’s book Trustworthy Online Controlled Experiments is also a great foundation.

I’d say you don’t need to dive into advanced math right away — just focus on core concepts like hypothesis testing, power analysis, and readout interpretation, and why they matter in real experiments. Ultimately, the best way to build confidence is by running tests — even small ones — asking questions, and seeing how the theory shows up in real results. More questions will come up, and confidence will come with experience.
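To make the power-analysis piece concrete, here is a minimal sketch of a pre-test sample size calculation for a conversion-rate A/B test using statsmodels. The baseline rate, minimum detectable effect, significance level, and power below are illustrative placeholders, not figures from the interview.

```python
# Minimal pre-test power analysis for a two-sample proportion (conversion rate) test.
# All numbers here are illustrative assumptions, not values from the interview.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.05   # assumed control conversion rate
mde = 0.005            # smallest absolute lift worth detecting
alpha = 0.05           # two-sided significance level
power = 0.80           # chance of detecting the lift if it is real

# Cohen's h effect size for the two proportions
effect_size = proportion_effectsize(baseline_rate + mde, baseline_rate)

# Required sample size per variant
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, ratio=1.0, alternative="two-sided"
)
print(f"~{int(round(n_per_variant)):,} users per variant")
```

Running this kind of calculation before launch is what keeps a test from being underpowered (too short) or wasteful (too long).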

Talk to us about some of the unique experiments you’ve run or helped run over the years.

The most interesting experiments I’ve worked on are the ones that go beyond the traditional playbook. A few examples:

  • Measuring long-term impact. Sometimes the true success metrics (like retention or lifetime value) happen outside the test window. In those cases, we’ve had to design ways to track and measure impact well beyond the experiment duration.
  • When randomization isn’t possible. In certain situations, we couldn’t randomize users into treatment and control groups. To handle this, we used matching techniques to create comparable groups and draw insights that are as reliable as possible (a minimal sketch of this idea follows after this list).
  • Holdout experiments. Setting up a holdout experiment to measure the aggregate impact of our experimentation efforts. This helps ensure we understand not just the effect of one change, but the overall effect of our experimentation program on key business outcomes.
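The interview doesn’t spell out the exact matching method, so here is a minimal propensity-score matching sketch under common assumptions: the column names (treated, outcome), the covariate list, and the one-nearest-neighbor matching choice are all illustrative.

```python
# A minimal propensity-score matching sketch for when users can't be randomized.
# Column names and the 1-nearest-neighbor matching choice are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_groups(df: pd.DataFrame, covariates: list[str]) -> pd.DataFrame:
    """Pair each treated row with its closest control on propensity score."""
    # 1. Estimate each user's probability of treatment from pre-exposure covariates.
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df["treated"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df["treated"] == 1]
    control = df[df["treated"] == 0]

    # 2. For every treated user, find the control user with the nearest propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_control = control.iloc[idx.ravel()]

    # 3. Compare outcomes between treated users and their matched controls.
    return pd.DataFrame({
        "treated_outcome": treated["outcome"].to_numpy(),
        "control_outcome": matched_control["outcome"].to_numpy(),
    })
```

In practice you would also check covariate balance after matching and be explicit about the assumptions (for example, no unmeasured confounders) before reading the comparison causally.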

What are the top data analysis pitfalls testers should be aware of?

I’ve answered hundreds of experimentation questions in support channels, and here are some of the most common pitfalls I see:

  1. Not planning ahead. If you don’t think through your experiment strategy before launch, you often end up with longer timelines and tougher decisions at the end. Unclear metrics, missing guardrails, or not setting a test duration can lead to tests that are either underpowered (too short) or wasteful (too long).
  2. Not sticking to the plan. It’s tempting to peek at results or cherry-pick the metrics that look good. But once you start moving the goalposts, you risk making a decision that isn’t backed by solid evidence — and that slows down your ability to scale experimentation.
  3. Ignoring data quality issues. Problems like sample ratio mismatch (SRM), logging gaps, or inconsistent tracking are red flags (a quick SRM check is sketched after this list). If you proceed without fixing them, your results can’t be trusted no matter how sophisticated the analysis looks.
  4. Using the wrong statistical test. The right test depends on your metric type and experiment design. Missing this nuance can lead to wrong conclusions or missed learnings. In some cases, adjustments like multiple hypothesis correction are also necessary to reduce false discoveries.
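As a concrete example of catching one of these pitfalls automatically, here is a minimal sample ratio mismatch check: a chi-square goodness-of-fit test comparing observed assignment counts to the configured split. The counts and the 0.001 alert threshold are illustrative assumptions, not values from the interview.

```python
# Minimal SRM check: chi-square goodness-of-fit test of observed assignment counts
# against the intended split. Counts and the 0.001 threshold are illustrative.
from scipy.stats import chisquare

observed = [50_912, 49_088]      # users actually assigned to control / treatment
intended_split = [0.5, 0.5]      # the split the experiment was configured with
total = sum(observed)
expected = [total * p for p in intended_split]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)

# A very small p-value means the observed split is unlikely under the intended one,
# which usually points to a logging or assignment bug rather than a real effect.
if p_value < 0.001:
    print(f"Possible SRM: p = {p_value:.2e}. Investigate before trusting results.")
else:
    print(f"No SRM detected: p = {p_value:.3f}")
```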

In the spirit of making experimentation accessible, I don’t think the goal is to “police” these mistakes. Instead, the best way to prevent them is through strong experimentation platforms and clear playbooks. With the right tools in place, experimenters don’t need to stress about every statistical nuance — the system itself can discourage bad practices and alert users when something looks off, making it much easier to adopt experimentation best practices.

CRO Expert Profile Cristina McGuire

What is your take on AI in data & experimentation? What is real and what is hype?

AI is already reshaping roles in data science. The key is to use AI to elevate your skills, not replace them. For example, AI can automate repetitive tasks, speed up exploratory analysis, create quick visualizations, or even suggest experiment ideas. But what it can’t replace is critical thinking, business context, and decision-making.

I use AI every day at work (and outside), and it’s allowed me to focus on more impactful work by removing blockers — like quickly building apps and visualization tools, or drafting documentation faster. But when it comes to foundational knowledge and building code logic, I treat AI as an assistant, not a source of truth. It’s a powerful helper, but the responsibility for understanding and validating still sits with me.

In experimentation, as AI capabilities grow, teams may ship new features faster, which increases the need for experimentation to scale alongside them. On top of that, some of the features being tested will themselves be AI-driven — like AI agents or recommendation engines — making experimentation more important than ever to measure impact in a fast-changing product space.

The “real” is AI as a powerful accelerator for workflows and a driver of more complex features. The “hype” is expecting it to fully run experiments or make business decisions on its own.

Cheers for reading! If you’ve caught the CRO bug… you’re in good company here. Be sure to check back often; we have fresh interviews dropping twice a month.

And if you’re in the mood for a binge read, have a gander at our earlier interviews with Gursimran Gujral, Haley Carpenter, Rishi Rawat, Sina Fak, Eden Bidani, Jakub Linowski, Shiva Manjunath, Deborah O’Malley, Andra Baragan, Rich Page, Ruben de Boer, Abi Hough, Alex Birkett, John Ostrowski, Ryan Levander, Ryan Thomas, Bhavik Patel, Siobhan Solberg, Tim Mehta, Rommil Santiago, Steph Le Prevost, Nils Koppelmann, Danielle Schwolow, Kevin Szpak, Marianne Stjernvall, Christoph Böcker, Max Bradley, Samuel Hess, Riccardo Vandra, Lukas Petrauskas, Gabriela Florea, Sean Clanchy, Ryan Webb, Tracy Laranjo, Lucia van den Brink, LeAnn Reyes, Lucrezia Platé, Daniel Jones, May Chin, Kyle Hearnshaw, Gerda Vogt-Thomas, Melanie Kyrklund, Sahil Patel, Lucas Vos, David Sanchez del Real, Oliver Kenyon, David Stepien, Maria Luiza de Lange, Callum Dreniw, Shirley Lee, Rúben Marinheiro, Lorik Mullaademi, Sergio Simarro Villalba, Georgiana Hunter-Cozens, Asmir Muminovic, Edd Saunders, Marc Uitterhoeve, Zander Aycock, Eduardo Marconi Pinheiro Lima, Linda Bustos, Marouscha Dorenbos, Cristina Molina, Tim Donets, Jarrah Hemmant, Cristina Giorgetti, Tom van den Berg, Tyler Hudson, Oliver West, Brian Poe, Carlos Trujillo, Eddie Aguilar, Matt Tilling, Jake Sapirstein, Nils Stotz, Hannah Davis, Jon Crowder, Mike Fawcett, Greg Wendel, and Sadie Neve.

Written By
Cristina McGuire
Lead Data Scientist on the Experimentation Team at Chewy
Edited By
Carmen Apostu
Content strategist and growth lead. 1M+ words edited and counting.