Testing Mind Map Series: How to Think Like a CRO Pro (Part 90)

By Simbar Dube

Interview with Simbar Dube

There’s the experimentation everyone talks about. And then there’s how it actually happens.

We’re hunting for signals in the noise to bring you conversations with people who live in the data. The ones who obsess over test design and know how to secure buy-in even when results are complex.

They’ve built systems that scale. Weathered the failed tests. Convinced the unconvincible stakeholders.

And now they’re here: opening up their playbooks and sharing the good stuff.

This week, we’re chatting with Simbar Dube, Conversion Research Specialist at Enavi.

Simbar, tell us about yourself. What inspired you to get into testing & optimization?

I stumbled into CRO in 2019 when I joined Invesp as a Content Editor and eventually worked my way up to Head of Marketing. Before that, I came from a journalism background, so the transition made sense in hindsight: I’ve always been obsessed with understanding people, asking the right questions and following the trail. That instinct never changed, just the environment did.

Now at Enavi, I focus on conversion research: digging into user journeys, spotting friction, and getting clear on why people don’t convert, then turning those insights into experiments that move numbers, not just “best practices.”

Answer in 5 words or less: What is the discipline of optimization to you?

Curiosity. Tested. Proven. Repeat.

You specialize in CRO research and possess a unique marketer’s perspective. With the advent of AI, how have your research processes evolved?

The fundamentals of my research process have not changed. I still believe the job is to understand where intent is collapsing, why it is collapsing, and what is creating hesitation in the customer journey. What AI has changed is the speed, the breadth, and the amount of evidence I can work through before I even get to hypothesis creation.

Before AI, a lot of strong research work was limited by time. You could go deep, but it was harder to go wide at the same time. You might read a few hundred reviews, work through survey responses manually, sit with recordings, pull themes from interviews, and then spend days synthesizing it all into something usable. Now I can move through that stage much faster. I can process a much larger volume of customer feedback, cluster recurring objections, compare language patterns, and get to a strong first layer of synthesis in hours rather than days.
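To make that concrete, here is a minimal sketch of what that clustering step can look like in Python, assuming the reviews have already been exported as plain text. The sample reviews, cluster count, and theme labels are illustrative, not from a real project; an actual pass would run over thousands of rows, but the shape of the workflow is the same.

```python
# Minimal sketch: grouping customer feedback into recurring themes.
# The reviews below are made up for illustration; in practice they
# would be loaded from a CSV export of a review or survey platform.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reviews = [
    "Sizing runs small, had to return for a larger fit",
    "Shipping took two weeks, way longer than promised",
    "Not sure about the fit, wish I could try it on first",
    "Great quality but the checkout kept erroring out",
    "Delivery was slow and tracking never updated",
    "Checkout page froze twice before my order went through",
]

# Turn free-text reviews into TF-IDF vectors, then cluster them.
vectors = TfidfVectorizer(stop_words="english").fit_transform(reviews)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)

# Print each cluster so a researcher can label the theme by hand
# (e.g. fit anxiety, delivery, checkout friction) before prioritizing.
for cluster in range(3):
    print(f"\nCluster {cluster}:")
    for review, label in zip(reviews, km.labels_):
        if label == cluster:
            print(" -", review)
```

The point of a sketch like this is speed, not truth: it gets you to a first layer of themes in minutes, and the judgment calls still happen after.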

That has changed the quality of the work too. I can now bring more sources into the same analysis without slowing the team down. So instead of relying too heavily on one research input, I can combine review mining, post-purchase surveys, on-site polls, interview transcripts, heatmap observations, session recordings, and funnel data much more efficiently. That matters because better CRO research usually comes from triangulation, not from falling in love with one source.

The other thing AI has improved is how I explore interpretation. It helps me pressure test what I think I am seeing. I can use it to surface possible patterns, alternative explanations, missing angles, and even gaps in my own reasoning. That is useful because in CRO, one of the biggest mistakes is jumping from raw evidence to a neat conclusion too quickly. AI helps me widen the field before I narrow it.

But I want to be careful here. AI has made my process faster, not more automatic. I do not treat it as the thing that tells me what is true. It can surface signals, but it cannot replace diagnosis. Just because a model finds repeated complaints in surveys or reviews does not mean it understands which issue is the actual bottleneck, which one is affecting the most valuable users, or which one deserves to be prioritized. That still requires segmentation, funnel context, customer understanding, and business judgment.

I think my marketer’s perspective makes that especially important. I am not just looking for friction in a UX sense. I am looking for friction that affects revenue, channel quality, offer clarity, trust, product understanding, and decision confidence. AI can help me get through the evidence faster, but I still have to connect that evidence back to commercial reality.

Talk to us about when experimenters should use AI, and when it’s not the right move!

This is an interesting question because AI is no longer some separate thing sitting outside experimentation. It is already baked into almost every tool we use now, whether that is analytics, testing platforms, user behavior tools, survey tools, or research workflows. When I joined this industry about seven years ago, that was not really the case. You can even see it with Shopify putting out apps like SimGym and Rollouts. That tells you AI is no longer sitting on the sidelines. It is becoming part of the experimentation environment, and as experimenters, we have had to adapt.

I still see a lot of backlash around AI, but I think some of that ignores what is plainly true. The acceleration is real. Research synthesis, idea expansion, QA support, copy exploration, and even reviewing large volumes of customer feedback used to take days. Now, a lot of that can be done in a few hours. That matters, especially for teams that are trying to move faster without adding more headcount.

That said, I think the right question is not whether experimenters should use AI. We already are. The real question is where it actually helps, and where people start giving it too much authority.

I think AI is extremely useful for surfacing signals. It can help cluster survey responses, summarize reviews, pull out recurring objections, expand hypothesis routes, and help teams pressure test copy or implementation plans. In those cases, AI is reducing manual effort and helping people get to a stronger first draft faster. Used well, it makes a good experimenter more efficient.

Where I would be careful is letting AI move from support into judgment. There is a difference between surfacing patterns and deciding what the real problem is. Just because AI finds repeated complaints in customer feedback does not mean it knows which issue is the real bottleneck, which one matters most commercially, or which one is actually worth testing first. That still requires context, segmentation, customer understanding, and business judgment.

I also do not think AI should be treated as the final authority on prioritization or interpretation. It can help organize evidence, but it should not be the thing deciding what goes on the roadmap or what a test result means for the business. A model can help you process information faster. It cannot replace the discipline of asking whether this is truly the main source of friction, whether it affects enough users to matter, and whether solving it will move a meaningful outcome.

So my view is pretty simple. Use AI when the job is acceleration, synthesis, exploration, and execution support. Do not use it to outsource diagnosis, prioritization, or strategic thinking. In experimentation, the goal is not to generate more ideas or prettier outputs. The goal is to reduce uncertainty and make better bets. AI is powerful when it strengthens that process. It becomes a problem when it creates the illusion that the hard thinking has already been done.

You are passionate about using experimentation as a decision-making tool, Simbar. Share some tests you’ve run on offline channels, or to gauge the effectiveness of a business play.

Yep, one that comes to mind was an offline-style experiment that was really just CRO thinking applied to the real world.

A retail client had just opened a new store, and we knew two things. First, nearby customers were worth more in store because average order value was higher there than it was online. Second, this was one of those products people genuinely wanted to try on first. A lot of customers were anxious about buying online because they did not fully trust that the fit or feel would be right without seeing it in person.

So the question was not simply how to drive more foot traffic. The real question was whether we could use the website to shift nearby demand into a more valuable channel that also better matched how customers wanted to buy.

We built the test around Buy Online, Pick Up In Store. We made pickup much more prominent on the site, pushed the new branch as the main option for shoppers in that area, and positioned it as the fastest and most reassuring path to purchase. The idea was to reduce the anxiety of buying online while also getting people into the store, where they were more likely to feel confident and spend more.

Then we measured whether pickup orders for that branch increased, whether foot traffic moved during the test window compared to the baseline period, and whether that local cohort showed stronger order value overall. What I like about that example is that it was not a classic on-site A/B test, but it was still experimentation in the purest sense. We had a business question, a customer behavior problem, a clear intervention, and a way to measure whether the play worked.
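For a sense of what that measurement step can look like, here is a minimal sketch comparing pickup share in the test window against the baseline period with a two-proportion z-test. All counts are hypothetical, and since the two windows are not randomized against each other, this only quantifies the size and reliability of the shift rather than proving causality.

```python
# Minimal sketch: did pickup share rise during the test window?
# Counts are hypothetical; in the real analysis they would come
# from order data for the local cohort around the new branch.
from statsmodels.stats.proportion import proportions_ztest

baseline_pickup, baseline_orders = 120, 2400   # pre-test window
test_pickup, test_orders = 210, 2600           # test window

# One-sided test: is pickup share higher in the test window?
stat, p_value = proportions_ztest(
    count=[test_pickup, baseline_pickup],
    nobs=[test_orders, baseline_orders],
    value=0,
    alternative="larger",
)
print(f"pickup share: {test_pickup / test_orders:.1%} "
      f"vs {baseline_pickup / baseline_orders:.1%}")
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```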

Another example is testing sales motion and stalled pipeline recovery. At the agency level, I have treated stalled deals as an experimentation problem rather than a follow-up problem. We tested different re-entry approaches: one based on generic nurture, one based on case study proof, and another based on a pointed diagnosis of what was likely constraining growth in that prospect’s funnel. The outcome I cared about was not open rate. It was whether the account re-engaged, whether we got a serious next conversation, and whether the deal started moving again. In my experience, highly specific diagnostic outreach usually performs better than polished but broad nurture because it gives the buyer a reason to think, not just a reason to click.
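As a rough illustration, the same experimental discipline applies to scoring those variants. Here is a minimal sketch using a chi-square test across the three re-entry approaches; the counts are hypothetical, and “re-engaged” stands in for whatever serious-next-conversation definition the team agrees on up front.

```python
# Minimal sketch: comparing re-engagement rates across three
# outreach variants. Counts are hypothetical.
from scipy.stats import chi2_contingency

#                      re-engaged, no response
generic_nurture = [9, 141]
case_study_proof = [14, 136]
pointed_diagnosis = [27, 123]

# Test whether re-engagement rate differs across the three variants.
stat, p_value, dof, _ = chi2_contingency(
    [generic_nurture, case_study_proof, pointed_diagnosis]
)
print(f"chi-square = {stat:.2f}, dof = {dof}, p = {p_value:.4f}")
```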

Cheers for reading! If you’ve caught the CRO bug… you’re in good company here. Be sure to check back often; we have fresh interviews dropping twice a month.

And if you’re in the mood for a binge read, have a gander at our earlier interviews with Gursimran Gujral, Haley Carpenter, Rishi Rawat, Sina Fak, Eden Bidani, Jakub Linowski, Shiva Manjunath, Deborah O’Malley, Andra Baragan, Rich Page, Ruben de Boer, Abi Hough, Alex Birkett, John Ostrowski, Ryan Levander, Ryan Thomas, Bhavik Patel, Siobhan Solberg, Tim Mehta, Rommil Santiago, Steph Le Prevost, Nils Koppelmann, Danielle Schwolow, Kevin Szpak, Marianne Stjernvall, Christoph Böcker, Max Bradley, Samuel Hess, Riccardo Vandra, Lukas Petrauskas, Gabriela Florea, Sean Clanchy, Ryan Webb, Tracy Laranjo, Lucia van den Brink, LeAnn Reyes, Lucrezia Platé, Daniel Jones, May Chin, Kyle Hearnshaw, Gerda Vogt-Thomas, Melanie Kyrklund, Sahil Patel, Lucas Vos, David Sanchez del Real, Oliver Kenyon, David Stepien, Maria Luiza de Lange, Callum Dreniw, Shirley Lee, Rúben Marinheiro, Lorik Mullaademi, Sergio Simarro Villalba, Georgiana Hunter-Cozens, Asmir Muminovic, Edd Saunders, Marc Uitterhoeve, Zander Aycock, Eduardo Marconi Pinheiro Lima, Linda Bustos, Marouscha Dorenbos, Cristina Molina, Tim Donets, Jarrah Hemmant, Cristina Giorgetti, Tom van den Berg, Tyler Hudson, Oliver West, Brian Poe, Carlos Trujillo, Eddie Aguilar, Matt Tilling, Jake Sapirstein, Nils Stotz, Hannah Davis, Jon Crowder, Mike Fawcett, Greg Wendel, Sadie Neve, Cristina McGuire, Richard Joe, Ruud van der Veer, Merritt Aho, Felipe Henrique Fogarolli, Riccardo Oricchio, Bruno Borges, Daniel Mullins, Matthew Bass, and Pieter Boonstra.

Written By
Simbar Dube
Conversion Research Specialist at Enavi.
Edited By
Carmen Apostu
Content strategist and growth lead. 1M+ words edited and counting.