Testing Mind Map Series: How to Think Like a CRO Pro (Part 82)

By Ruud van der Veer · October 20, 2025

Interview with Ruud van der Veer

There’s the experimentation everyone talks about. And then there’s how it actually happens.

We’re hunting for signals in the noise to bring you conversations with people who live in the data. The ones who obsess over test design and know how to get buy-in when the numbers aren’t pretty.

They’ve built systems that scale. Weathered the failed tests. Convinced the unconvincible stakeholders.

And now they’re here: opening up their playbooks and sharing the good stuff.

This week, we’re chatting with Ruud van der Veer, Lead CRO & Data @ Happy Horizon.

Ruud, tell us about yourself. What inspired you to get into testing & optimization?

Earlier in my career, I worked in usability consulting, but I never actually tested or validated the effect of my advice. Most of my recommendations were based on scientific research (this was the early 2010s), but I never saw or measured the impact of my consultancy. I also worked as a designer for a couple of years, where I found out that purely creative work, without knowing its effect, was never satisfying for me.

I then switched to a performance marketing job, where I found it very interesting to see the impact of my changes through before-and-after analysis (at least, I thought it was because of me!). I later combined my design background with my performance experience, growing within the mindset that actions should have a measurable effect. When I stepped away from performance marketing, I focused solely on optimizing website experiences. Adopting experimentation, CRO and A/B testing as validation methods was a natural step shortly after.

What’s the one resource you recommend to aspiring testers & optimizers?

Start by following leading optimizers in your country and internationally via LinkedIn, their blogs or newsletters. Get familiar with the topics they address, but keep in mind that you don't have to talk about the same things. There's a good chance your specific situation doesn't need all the cutting-edge stuff they are discussing. But let it inspire you to explore topics you are unfamiliar with. Then make it your own and apply it to your daily struggles. Like experimentation itself, your own development should happen step by step.

Also, keep an eye out for local meetups, webinars or other events on CRO, experimentation or digital analytics. Make a point of actually speaking with people there. Ask them to connect and stay engaged with their content.

Emphasizing people to follow doesn't mean you shouldn't think for yourself; it's how you get familiar with the topics that matter most to others. Especially when starting out, things can be overwhelming, and you'll feel like hopping on every new insight. But keep looking out for content and you'll start to recognize what is most important for you at the time.

Answer in 5 words or fewer: What is the discipline of optimization to you?

Growth without risk*
*It’s never without risk, but a bold statement like this sounds better 🙂


What are the top 3 things people MUST understand before they start optimizing?

1. Optimization is more about the process of learning than about finding winners, but winners still determine your business case (and future). Yes, your hypothesis is important, but knowing what works and what doesn't (and why) is more important. So don't spend too much time developing the case for every specific experiment you're going to run. Run it! Sometimes testing for exploratory reasons is just as good as testing with pre-defined hypotheses; exploratory testing is simply faster, so you learn faster. Velocity matters. Just don't fall into the trap of treating this like a one-time project, grabbing a "top 100 A/B tests" list from LinkedIn and calling your research done. What matters is building a process that keeps you learning constantly, something that actually sticks.

2. You and your statistics could be wrong! So stay humble. Be aware of confirmation bias (especially when you're starting out). Your idea/hypothesis could be very wrong. And even if your statistical analysis says you're right, you could still be wrong. There is always a chance of false positives and negatives (the sketch after this list puts a rough number on that risk). So don't declare victory too soon when you have a so-called 'winner'.

3. You are working on growing your product while taking away the risk of decline. This may be a more pessimistic view of experimentation, but I believe it is the basis of what we do. We WANT to optimize our products, right? So why don't we just implement changes directly? Because they can backfire! Experimentation lets us validate whether a change backfires and hurts the business before we implement it. We first ask: 'Does this hurt the business, the KPIs, or anything else?' If not? Good! Then we analyse how to approach the subject further, the test results, and so on.
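
To put a rough number on point 2, here is a minimal sketch, assuming independent tests all run at the conventional significance level of 0.05, of how quickly the chance of at least one false positive accumulates. The function name and figures are illustrative, not from any specific tool.

```typescript
// Minimal sketch: family-wise false positive rate for independent tests.
// Assumes every test uses the same significance level (alpha) and that
// tests are independent; real programmes rarely satisfy both exactly.
function familywiseFalsePositiveRate(alpha: number, numTests: number): number {
  // P(at least one false positive) = 1 - P(no false positives)
  return 1 - Math.pow(1 - alpha, numTests);
}

// At alpha = 0.05, ten tests with no real effect already carry a ~40%
// chance of producing at least one spurious "winner".
console.log(familywiseFalsePositiveRate(0.05, 10).toFixed(2)); // "0.40"
```

In other words, 'stay humble' is more than a platitude: the more you test, the more so-called winners you should expect by chance alone.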


How do you treat qualitative & quantitative data to minimize bias?

When using qualitative and quantitative data as part of research/ideation: be careful about treating these methods as validation to prove your point or hypothesis, because there will always be some deeper analysis that seems to confirm your insight. There is always a segment that 'proves' your point. Keep looking at the big picture and create a framework that works down to the most important performance indicators. For example, more revenue is never a goal; it's a business result. What drives more revenue are things like quantity, user acquisition, user activation and order value. Do your insights contribute to these topics, and therefore to the main KPI? Or is it some random segment without any impact?

'Kill your darlings' if you have to. And move on. (Meaning: you sometimes care for an idea like it's your own child, but occasionally you have to let it go. Not actually killing your loved ones 🙂)

For statistical analysis: set the parameters for your test results based on your optimization strategy. Do you feel that taking more risk will benefit the growth and velocity of your experimentation programme? Or do you want the absolute minimum risk for the business? That determines how strictly you look at the numbers. Stick to the parameters you set beforehand, or change your strategy for future tests; don't bend the numbers afterward. Also, make sure to discuss results with other experimenters, data analysts or your CRO/experimentation partners to prevent (confirmation) bias. The sketch below shows how these parameters translate into concrete test requirements.
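
As a hedged illustration of what 'parameters set beforehand' can look like in practice, here is a sketch of the standard normal-approximation sample size calculation for comparing two proportions. The baseline rate and lift are made-up example values, and the z-constants correspond to a two-sided alpha of 0.05 and 80% power; a riskier strategy would loosen them.

```typescript
// Sketch: required visitors per variant under the usual normal
// approximation for a two-proportion test. Example numbers only.
function sampleSizePerVariant(
  baselineRate: number,  // e.g. 0.04 = 4% conversion rate
  absoluteLift: number,  // minimum detectable effect, e.g. 0.005
  zAlpha = 1.96,         // two-sided alpha = 0.05
  zBeta = 0.84           // power = 0.80
): number {
  const variance = 2 * baselineRate * (1 - baselineRate);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / absoluteLift ** 2);
}

// Detecting a 0.5 percentage-point lift on a 4% baseline needs roughly
// 24,000 visitors per variant; accepting more risk (higher alpha,
// lower power) shrinks that number, which is the strategy trade-off.
console.log(sampleSizePerVariant(0.04, 0.005)); // ~24085
```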

How can optimization teams play nice with data teams? What are the top shifts in the org required for this alignment?

Learn how the technical side of data collection is set up. Know how metrics are gathered and/or transformed. Leave no room for doubt on why certain metrics show certain data. Work together to create a measurement plan, OKRs or whatever framework is used. This way you trust the data, and the data team knows your thought process.

Then, once this is settled, make sure insights are collected and shared with each other. Use the data team as input for ideation or as validation for your research. They may have insights you can use to make the case for an experiment. And when determining the impact of an experiment, you come back to the parameters you set earlier.


Talk to us about some of the unique experiments you’ve run over the years.

Most of the time, I define 'unique' by how unexpected the test outcome is. We work with a lot of e-commerce companies, and while you might have a great unique value proposition, the most important factors are usually still supply, demand, price and delivery conditions. For one specific client, we assumed this was indeed what mattered most, so we ran a lot of UX tests to remove friction, add pricing tables, and so on. But then experiments with sustainability, sustainable delivery, green choices, etc. gave the best results. We thought customers cared solely about pricing, but it turned out there was a whole other motivator, or at least other determining factors we hadn't considered before.

Other unique experiments for us are the ones with the biggest technical challenges in measuring or even altering the variation code. In one case, we helped a product team with experimentation, but their product was separate from the larger technical infrastructure, so the organisation's server-side implementation couldn't run there. We tested a client-side solution with Convert to still be able to experiment, while integrating with the experimentation culture of the other product teams that weren't using Convert. It shows that even though Convert made it possible to experiment on this part, A/B testing tools should never limit your overall process. (A generic sketch of how client-side assignment can work follows below.)
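
For readers wondering what 'client-side' means concretely, here is a generic sketch of deterministic client-side bucketing. This is not Convert's actual API; the hash and helper names are invented for illustration, but the core idea is common to client-side tools: derive a stable variation from a visitor ID so the same visitor always sees the same experience.

```typescript
// Generic illustration (not a vendor SDK): deterministically bucket a
// visitor into a variation from a stable visitor ID, entirely in the
// browser, with no server-side component required.
function assignVariation(
  visitorId: string,
  experimentId: string,
  numVariations: number
): number {
  const key = `${experimentId}:${visitorId}`;
  let hash = 0;
  for (let i = 0; i < key.length; i++) {
    // Simple 32-bit rolling hash; real tools use stronger hashes.
    hash = (hash * 31 + key.charCodeAt(i)) >>> 0;
  }
  return hash % numVariations;
}

// The same visitor/experiment pair always maps to the same variation,
// so the experience stays consistent across page loads.
console.log(assignVariation("visitor-abc-123", "exp-42", 2));
```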

Cheers for reading! If you’ve caught the CRO bug… you’re in good company here. Be sure to check back often, we have fresh interviews dropping twice a month. And if you’re in the mood for a binge read, have a gander at our earlier interviews with Gursimran Gujral, Haley Carpenter, Rishi Rawat, Sina Fak, Eden Bidani, Jakub Linowski, Shiva Manjunath, Deborah O’Malley, Andra Baragan, Rich Page, Ruben de Boer, Abi Hough, Alex Birkett, John Ostrowski, Ryan Levander, Ryan Thomas, Bhavik Patel, Siobhan Solberg, Tim Mehta, Rommil Santiago, Steph Le Prevost, Nils Koppelmann, Danielle Schwolow, Kevin Szpak, Marianne Stjernvall, Christoph Böcker, Max Bradley, Samuel Hess, Riccardo Vandra, Lukas Petrauskas, Gabriela Florea, Sean Clanchy, Ryan Webb, Tracy Laranjo, Lucia van den Brink, LeAnn Reyes, Lucrezia Platé, Daniel Jones, May Chin, Kyle Hearnshaw, Gerda Vogt-Thomas, Melanie Kyrklund, Sahil Patel, Lucas Vos, David Sanchez del Real, Oliver Kenyon, David Stepien, Maria Luiza de Lange, Callum Dreniw, Shirley Lee, Rúben Marinheiro, Lorik Mullaademi, Sergio Simarro Villalba, Georgiana Hunter-Cozens, Asmir Muminovic, Edd Saunders, Marc Uitterhoeve, Zander Aycock, Eduardo Marconi Pinheiro Lima, Linda Bustos, Marouscha Dorenbos, Cristina Molina, Tim Donets, Jarrah Hemmant, Cristina Giorgetti, Tom van den Berg, Tyler Hudson, Oliver West, Brian Poe, Carlos Trujillo, Eddie Aguilar, Matt Tilling, Jake Sapirstein, Nils Stotz, Hannah Davis, Jon Crowder, Mike Fawcett, Greg Wendel, Sadie Neve, Cristina McGuire, and Richard Joe.

Written By
Ruud van der Veer
Lead CRO and Data @ Happy Horizon
Edited By
Carmen Apostu
Content strategist and growth lead. 1M+ words edited and counting.