Testing Mind Map Series: How to Think Like a CRO Pro (Part 83)
Interview with Merritt Aho
There’s the experimentation everyone talks about. And then there’s how it actually happens.
We’re hunting for signals in the noise to bring you conversations with people who live in the data. The ones who obsess over test design and know how to get buy-in when the numbers aren’t pretty.
They’ve built systems that scale. Weathered the failed tests. Convinced the unconvincible stakeholders.
And now they’re here: opening up their playbooks and sharing the good stuff.
This week, we’re chatting with Merritt Aho, who works in Digital Analytics at Breeze Airways.
Merritt, tell us about yourself. What inspired you to get into experimentation?
I signed up for Adobe’s digital analytics competition while I was in school. They gave us access to Backcountry.com’s data and asked us to surface insights and make recommendations. As my partner and I pored over the data and developed our recommendations for the “client”, I discovered a real love for the craft of tying data to UX in pursuit of better experiences for customers and growth for the business. Everything I’ve done since has been some variation on that theme. When I entered the workforce, I was fortunate enough to get hired doing A/B testing and quickly saw how exciting it was to run experiments and see almost immediately how (usually not) well ideas worked. I became addicted to it.
How many years have you been testing for?
Since 2011. Not sure how many thousands of experiments I’ve been involved in, but it never really gets old.
What’s the one resource you recommend to aspiring testers & optimizers?
Find a community of like-minded professionals dedicated to the craft. I really like TLC, and I’m a big fan of several of the conferences and the communities that have built up around them: Conversion Hotel and Experimentation Island come immediately to mind, but I’d also be remiss if I didn’t mention CXL Live, which opened a lot of doors for me personally.
What are the top 3 things people get wrong about experimentation?
I’m going to list one big one, which is that people seem to generally misunderstand where the value comes from in experimentation. If you have a great idea and you know it’s great and you test it and it wins, well, I hate to tell you, but you just lost money testing that thing when you could have just rolled it out sooner and benefited from all that green! People tend to attribute weird value to the experiment itself when things win. I think the only time this is the case is when that idea would never have seen the light of day if you didn’t have the ability to test. In other words, your testing capability emboldened you to try something you wouldn’t have otherwise. In those cases, celebrate the win all day! But in my experience, those are rare cases.
Most of the value, I think, comes from saving you from your bad ideas. And I still haven’t met anyone who has mostly good ideas. If they do have mostly good ideas, they should really stop wasting their time with experiments. This value story is really difficult to communicate to stakeholders. Ironically, that misunderstanding usually results in more glory for the experimentation team: people associate us with their wins, while there’s a negative halo around us when ideas lose.
Aside from that large and pervasive misunderstanding, I’d say people often fail to appreciate that the optimal experimentation practice looks very different at different organizations. Experimentation is all about increasing the fidelity of your decision-making information, and for a lot of orgs, it makes a lot of sense to make decisions with low-fidelity info. There’s a range that’s appropriate. So people shouldn’t necessarily get down on themselves (or look down on others) for doing things that are less sophisticated when that might make perfect sense for their org and scenario.
AI is everywhere! How are you weaving AI into your experimentation workflows?
I use AI in my work every day. I’d say it’s indispensable at this point. I use it more for technical work than anything else: writing code and SQL, and shortcutting analysis in Python notebooks and data viz. Cursor, for example, is phenomenal IMO. The most disappointing experiences remain trying to use Gemini in BigQuery with my GA4 data and trying to use MSFT Copilot in PowerPoint. Useless. But I think the next year or so is going to see some huge transformations in our org enabled by AI, and I love where things are headed.
Your profile is unique. You’ve been a hands-on practitioner as well as a C-suite strategist. From this vantage point, how can experimenters speak the language of the C-suite?
Leaders have to make a lot of decisions, and there’s a premium on making them quickly and taking calculated risks. Too many experimenters are actually risk-averse or unable to use data effectively when there are higher levels of ambiguity. I think being aware of this and flexing your approach to experimentation to different risk/speed/reward dynamics can make you a more valuable resource to the C-suite.
Talk to us about some of the more ‘out there’ experiments you’ve run over the years?
Ha. I’m not sure I’ve done anything truly out there. I’ve proposed many wild ideas, and I like to do so just to get people thinking about what-ifs, but most of them end up on the cutting room floor. I’ve run a few “switchback” tests with our authentication provider. These are quasi-experiments, and they’re a pain to manage and analyze, but it’s kinda fun to do something different. I guess there was one time where we tested moving a massive website to a new domain, design, and platform. This site was an absolute behemoth for organic search, so the stakes were very high. Not to mention the fact that nearly every redesign I’ve ever seen tested has failed out of the gate. We conducted the test with server-side redirects, and it was a legit A/B test with pretty sophisticated instrumentation. The best thing about it was that it was actually a very successful test financially.
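For readers curious what that kind of server-side redirect split can look like in practice, here’s a minimal sketch of deterministic, per-visitor assignment, assuming a stable visitor ID (say, from a cookie) is available to the server. The domain, salt, and split ratio below are hypothetical placeholders, not the actual setup Merritt describes.

```python
# Minimal sketch: deterministic server-side split for a domain-migration A/B test.
# The visitor ID is hashed with an experiment salt, so every request from the
# same visitor lands on the same side of the test. All names here (NEW_DOMAIN,
# EXPERIMENT_SALT, TREATMENT_SHARE) are hypothetical, for illustration only.

import hashlib
from typing import Optional

EXPERIMENT_SALT = "domain-migration-test"   # change per experiment to re-randomize
NEW_DOMAIN = "https://www.new-example.com"  # hypothetical treatment destination
TREATMENT_SHARE = 0.5                       # 50/50 split between old and new site


def bucket(visitor_id: str) -> float:
    """Map a visitor ID to a stable number in [0, 1)."""
    digest = hashlib.sha256(f"{EXPERIMENT_SALT}:{visitor_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0x100000000


def redirect_target(visitor_id: str, requested_path: str) -> Optional[str]:
    """Return the redirect URL for treatment visitors, or None to serve the legacy site."""
    if bucket(visitor_id) < TREATMENT_SHARE:
        return f"{NEW_DOMAIN}{requested_path}"
    return None


if __name__ == "__main__":
    for vid in ("visitor-123", "visitor-456", "visitor-789"):
        print(vid, redirect_target(vid, "/gear/backpacks"))
```

Because the bucket is derived from a hash rather than stored state, assignment stays consistent across sessions without a lookup table, and the redirect carries the requested path through, which matters when organic search traffic is the thing you’re worried about breaking.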
Cheers for reading! If you’ve caught the CRO bug… you’re in good company here. Be sure to check back often; we have fresh interviews dropping twice a month.
And if you’re in the mood for a binge read, have a gander at our earlier interviews with Gursimran Gujral, Haley Carpenter, Rishi Rawat, Sina Fak, Eden Bidani, Jakub Linowski, Shiva Manjunath, Deborah O’Malley, Andra Baragan, Rich Page, Ruben de Boer, Abi Hough, Alex Birkett, John Ostrowski, Ryan Levander, Ryan Thomas, Bhavik Patel, Siobhan Solberg, Tim Mehta, Rommil Santiago, Steph Le Prevost, Nils Koppelmann, Danielle Schwolow, Kevin Szpak, Marianne Stjernvall, Christoph Böcker, Max Bradley, Samuel Hess, Riccardo Vandra, Lukas Petrauskas, Gabriela Florea, Sean Clanchy, Ryan Webb, Tracy Laranjo, Lucia van den Brink, LeAnn Reyes, Lucrezia Platé, Daniel Jones, May Chin, Kyle Hearnshaw, Gerda Vogt-Thomas, Melanie Kyrklund, Sahil Patel, Lucas Vos, David Sanchez del Real, Oliver Kenyon, David Stepien, Maria Luiza de Lange, Callum Dreniw, Shirley Lee, Rúben Marinheiro, Lorik Mullaademi, Sergio Simarro Villalba, Georgiana Hunter-Cozens, Asmir Muminovic, Edd Saunders, Marc Uitterhoeve, Zander Aycock, Eduardo Marconi Pinheiro Lima, Linda Bustos, Marouscha Dorenbos, Cristina Molina, Tim Donets, Jarrah Hemmant, Cristina Giorgetti, Tom van den Berg, Tyler Hudson, Oliver West, Brian Poe, Carlos Trujillo, Eddie Aguilar, Matt Tilling, Jake Sapirstein, Nils Stotz, Hannah Davis, Jon Crowder, Mike Fawcett, Greg Wendel, Sadie Neve, Cristina McGuire, Richard Joe, and Ruud van der Veer.
Written By
Merritt Aho
Edited By
Carmen Apostu