Testing Mind Map Series: How to Think Like a CRO Pro (Part 6)

By Jakub Linowski · November 17, 2021

Interview with Jakub Linowski of GoodUI

Have you ever wondered why some people just seem to have a knack for CRO?

It’s not just because they are good at setting up experiments. They also know how to think about optimization in a different way, which is what this series will help you do as well. We’ll take a close look at the mindsets behind successful CRO and how you can apply them to your own strategy.

If you can get yourself into the right frame of mind, then success will come much easier than if you try without first understanding where your blind spots might be. Ultimately, the Testing Mind Map Series is meant to help you better plan out your optimization strategy and execute tests with more confidence!

In this article, Jakub Linowski of GoodUI shares why experimentation is powerful not only as a method, but also as a source of insights that can inform better decisions.

Jakub, tell us about yourself. What inspired you to get into testing & optimization?

I was drawn to the world of experimentation sometime around 2014, when my design background led me down this path. As people started sharing examples of good UI and “best practice” lists, so did I, and that’s how GoodUI.org came about. It didn’t take me long, however, to realize that all my suggestions and UI patterns were closer to visual hypotheses than anything backed by evidence. I really wanted to gain more confidence and do a better job at filtering the good ideas from the bad ones.

So when I heard about A/B testing, it got me quite excited (even though I had no idea what a confidence interval was). I hired a front-end developer and we kicked off a little optimization agency. We started testing ideas we read on blogs, heard from our clients, or drew from our own emerging GoodUI patterns library. With most of our clients allowing us to publish A/B tests openly, it started becoming apparent that some patterns were better than others. Some didn’t do much. Others replicated well. And others resulted in negative outcomes.

We now needed to weigh these experiments.

And so GoodUI.org quickly started turning into a repository of similar and comparable experiments with a full-circle feedback loop. Patterns that performed better, with higher frequency and impact, were surfaced to the top (using median aggregate data), while test outcomes for similar patterns were fed back into our database, correcting our predictions and increasing accuracy.

So yes, I enjoy experimentation both for the wonderful method it is and as a powerful source of professional knowledge that enables us to make better predictions.

How many years have you been optimizing for? What’s the one resource you recommend to aspiring testers & optimizers?

We ran our first leap A/B test back in May of 2014, on a quote landing page for a major insurance company. The variation included everything we knew at the time about improving copy and lead forms, based on our own limited experience. The outcome was a relative +53% increase in leads (±28, p-val 0.0002). That was the experiment that got me hooked.
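For readers curious how a result like that is typically evaluated, here is a minimal sketch of a two-proportion z-test in Python. The traffic and conversion counts are hypothetical, chosen only to mimic a +53% relative lift; they are not the actual data from this insurance experiment.

```python
import math
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of control (a) and variation (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p-value
    return (p_b - p_a) / p_a, p_value                   # relative lift, p-value

# Hypothetical counts for illustration only:
lift, p = two_proportion_z_test(conv_a=130, n_a=6_500, conv_b=199, n_b=6_500)
print(f"relative lift: {lift:+.0%}, p-value: {p:.4f}")
```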

As far as a resource, I love learning from what others are testing. It is especially exciting and valuable to look up experiments from big players like Netflix, Airbnb, and Amazon, which we know have good sample sizes and run lots of tests. Overall, I think it’s always a good idea to learn from people a couple of steps ahead of us (as suggested by many, including Robert Greene in Mastery). 🙂

Answer in 5 words or less: What is the discipline of optimization to you?

Optimization means we’re improving things.

(Outcomes are critical to optimization. For example, a hundred flat or undesirable experiment outcomes are not good enough. You might learn a ton, yes. But in order for us to optimize something, we need to move the needle in the direction we want.)

What are the top 3 things people MUST understand before they start optimizing?

EXPLORATION – generating as many ideas as possible.

EXPLOITATION – prioritizing ideas with past results for greater speed.

EXPERIMENTATION – opening our ideas to be falsified or validated.

How do you treat qualitative & quantitative data so it can tell you an unbiased story?

I do agree with the idea of validating A/B test results. In general, the more measures we have that are coherent, the more reliable and trustworthy our experiments can become.

When it comes to comparing results, there are a few ways we can do so:

  • Comparing Multiple Metrics From The Same Experiment (e.g. consistency of effect across: adds to cart, sales, revenue, return purchases, etc.)
  • Comparing Historical Data Across Separate Experiments (e.g. consistency of effect between two separate experiments run on two separate websites; see the sketch below)
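
To make the first of these concrete, here is a minimal sketch of a directional-coherence check in Python. The metric names and effect sizes are invented for illustration, not taken from a real experiment.

```python
# Relative effect of one variation on each metric of the same experiment
# (hypothetical values).
experiment_effects = {
    "adds_to_cart":     +0.08,
    "sales":            +0.05,
    "revenue":          +0.04,
    "return_purchases": +0.02,
}

# The result is directionally coherent if every metric moves the same way.
directions = {effect > 0 for effect in experiment_effects.values()}
print("directionally coherent:", len(directions) == 1)
```

The same idea extends to the second check: compute the effect of the same pattern in two independent experiments and compare signs and magnitudes.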

What kind of learning program have you set up for your optimization team? And why did you take this specific approach?

I strongly believe that experiment replication is a critical element in getting better at predicting test outcomes (generating professional knowledge).

Hence, in our own platform, we group similar experiments and aggregate similar metrics.

When building a knowledge base from experiments, the other important thing is to minimize publication bias. That is, keeping a record of all experiments regardless of their outcomes (positive, negative, significant, and insignificant alike).
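
As a toy illustration of that approach, the sketch below logs every test of a pattern, winners and losers alike, and surfaces each pattern’s median relative effect, roughly the kind of median aggregation mentioned earlier. The pattern names and results are invented.

```python
from statistics import median

# Every recorded test of each pattern, including flat and negative outcomes
# (all values hypothetical).
pattern_results = {
    "single-column checkout form": [+0.12, +0.04, -0.01, +0.07],
    "urgency banner":              [+0.02, -0.05, 0.00],
}

for pattern, effects in pattern_results.items():
    print(f"{pattern}: median effect {median(effects):+.1%} "
          f"across {len(effects)} tests")
```

Because flat and negative results stay in the record, the median cannot be inflated by publishing only the wins.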

What is the most annoying optimization myth you wish would go away?

Most recently, I’ve been annoyed by people claiming that experimentation has no downside (captured nicely by this wonderful LinkedIn thread). A subtle way this sometimes comes out is through statements sounding similar to “there are no losing tests, only learnings”.

This might be true in ivory tower worlds where learning is the key goal and where the experimenter is protected from costs.

However, as a profession, when we use experimentation as a tool for optimizing clients’ websites, there is no free lunch. Running experiments comes with its costs, risks, downsides and upsides. From this angle, I think it’s extremely healthy to track and admit the outcomes for what they truly are (including being comfortable admitting to streaks of negative tests and not whitewashing them). All professions need both positive and negative feedback loops to keep getting better.

Jakub Linowski Infographic

Download the infographic above to use when inspiration becomes hard to find!

Hopefully, our interview with Jakub will help guide your conversion strategy in the right direction! What advice resonated most with you?

Be sure to stay tuned for our next interview with a CRO expert who takes us through even more advanced strategies! And if you haven’t already, check out our interviews with Gursimran Gujral of OptiPhoenix, Haley Carpenter of Speero, Rishi Rawat of Frictionless Commerce, Sina Fak of ConversionAdvocates, and Eden Bidani of Green Light Copy!

Originally published November 17, 2021 - Updated April 01, 2024
Authors
Jakub Linowski
Jakub Linowski publishes hundreds of A/B test results to help optimization teams reach desirable outcomes.
Editors
Carmen Apostu
In her role as Head of Content at Convert, Carmen is dedicated to delivering top-notch content that people can’t help but read through. Connect with Carmen on LinkedIn for any inquiries or requests.
