HARKing

Contributor

Brandon Janse van Vuuren,

Experimentation and Web Analytics Lead at Nawiri Group

What is HARKing?

HARKing stands for Hypothesizing After Results are Known. In experimentation, it refers to the practice of stating or retrofitting a hypothesis based on the observed outcome of an A/B test, rather than formulating it beforehand.

Instead of asking “Did the data confirm our hypothesis?”, HARKing reverses the logic: “We saw this in the data, so that must have been our hypothesis all along.” It often occurs when teams let interim results influence their narrative or stop an experiment early once statistical significance appears, even if they didn’t plan to.

While not always malicious, HARKing undermines the statistical validity of experiments and inflates the risk of drawing the wrong conclusion.

Why is HARKing a Problem?

When you base hypotheses on already-seen results, you’re not testing a prediction—you’re rationalizing a pattern you’ve already observed. That undermines the entire premise of A/B testing: to evaluate a proposed change in a controlled, unbiased way.

Here’s what can go wrong:

  • Increased false positives. Peeking and HARKing both raise the chance of concluding there’s an effect when none exists.
  • Untrustworthy analysis. Standard statistical methods assume a fixed hypothesis. Violating that assumption makes your p-values and confidence intervals meaningless.
  • Misleading launches. You might ship a feature that looks great only due to random noise.
  • Eroded credibility. HARKing can lead teams and stakeholders to distrust experimentation results.
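The false-positive inflation from peeking is easy to demonstrate with a simulation. Below is a minimal sketch (Python, stdlib only; the 500 users per variant, 2,000 simulated A/A tests, and 10 peeks are all hypothetical parameters) comparing a single pre-planned look against repeated interim looks at a test with no real effect.

```python
import math
import random

random.seed(0)

def p_value(mean_diff, n, sigma=1.0):
    """Two-sided p-value for a difference of two sample means (known sigma)."""
    se = sigma * math.sqrt(2.0 / n)
    z = abs(mean_diff) / se
    return math.erfc(z / math.sqrt(2))

def run_aa_test(total_n, peeks):
    """Simulate an A/A test (no true effect).

    Returns True if the difference looked 'significant' (p < 0.05)
    at ANY of the interim peeks -- i.e. a false positive.
    """
    a_sum = b_sum = 0.0
    n = 0
    significant = False
    for i in range(1, peeks + 1):
        checkpoint = total_n * i // peeks
        while n < checkpoint:
            a_sum += random.gauss(0, 1)
            b_sum += random.gauss(0, 1)
            n += 1
        if p_value((a_sum - b_sum) / n, n) < 0.05:
            significant = True
    return significant

trials = 2000
single = sum(run_aa_test(500, peeks=1) for _ in range(trials)) / trials
peeked = sum(run_aa_test(500, peeks=10) for _ in range(trials)) / trials
print(f"false positive rate, one planned look: {single:.3f}")
print(f"false positive rate, 10 peeks:         {peeked:.3f}")
```

With a single planned look the false positive rate stays near the nominal 5%; checking at every peek and stopping on the first "significant" result pushes it well above that, even though nothing real is happening.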

What Leads Teams to HARK?

Common drivers include:

  • Desire for quick wins or validation of ideas.
  • Pressure to show impact from a launch.
  • Confirmation bias and emotional attachment to hypotheses.
  • Lack of understanding about how peeking skews results.
  • Vague or undocumented hypotheses, making it easy to reframe after the fact.

HARKing vs Exploratory Analysis

Not all post-hoc analysis is bad. Here’s the difference:

  • Exploratory analysis is done after the main experiment concludes. It’s about learning, spotting patterns, and generating new test ideas.
  • HARKing pretends a post-hoc insight was the original plan, treating it as if it had been tested rigorously.

If you discover something unexpected during analysis, that’s great—but frame it as a hypothesis to test in a follow-up experiment.

How to Avoid HARKing

Avoiding HARKing means reinforcing both statistical discipline and organizational culture:

Before the test:

  • Write and document clear, specific hypotheses.
  • Define test duration and sample size in advance.
  • Choose primary metrics and guardrails up front.
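"In advance" can be made concrete with a standard power calculation. The sketch below (Python; the baseline rate, lift, and thresholds are illustrative assumptions, not recommendations) estimates the per-variant sample size for a two-proportion z-test before the experiment starts.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base, mde_rel, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant for a two-proportion z-test.

    p_base:  baseline conversion rate (e.g. 0.05 = 5%)
    mde_rel: minimum detectable effect, relative (e.g. 0.10 = +10% lift)
    """
    p_new = p_base * (1 + mde_rel)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    n = (z_alpha + z_beta) ** 2 * variance / (p_new - p_base) ** 2
    return math.ceil(n)

# 5% baseline conversion, aiming to detect a 10% relative lift
n = sample_size_per_variant(0.05, 0.10)
print(f"{n} visitors per variant")
```

Committing to a number like this up front removes the temptation to stop "when it looks significant" and reframe the hypothesis afterward.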

During the test:

  • Don’t peek at results unless you’re using a statistical method (like sequential testing) that allows it.
  • Log all changes and decisions transparently.
  • Use experimentation platforms that suppress early visibility of results or flag Sample Ratio Mismatches (SRMs).
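An SRM check itself is a small computation. The sketch below (Python; the visitor counts and the 0.001 threshold are illustrative, though 0.001 is a commonly used cutoff) runs a chi-square test with one degree of freedom against the planned traffic split.

```python
import math

def srm_check(n_a, n_b, expected_share_a=0.5, alpha=0.001):
    """Chi-square test (1 df) for Sample Ratio Mismatch.

    n_a, n_b:         observed visitor counts in variants A and B
    expected_share_a: planned share of traffic assigned to A
    Returns (p_value, srm_detected).
    """
    total = n_a + n_b
    exp_a = total * expected_share_a
    exp_b = total - exp_a
    chi2 = (n_a - exp_a) ** 2 / exp_a + (n_b - exp_b) ** 2 / exp_b
    # survival function of a chi-square with 1 df
    p = math.erfc(math.sqrt(chi2 / 2))
    return p, p < alpha

# planned 50/50 split; is 10,000 vs 10,600 plausible randomness?
p, mismatch = srm_check(10000, 10600)
print(f"p = {p:.5f}, SRM detected: {mismatch}")
```

A detected SRM means the randomization itself is broken, so any conclusion drawn from the test (HARKed or not) is suspect and the experiment should be investigated rather than interpreted.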

After the test:

  • Interpret results based on the original hypothesis.
  • Clearly separate exploratory insights from validated conclusions.
  • Consider rerunning surprising results before making decisions.

Culturally:

  • Normalize failed tests and unexpected outcomes.
  • Focus on learning rate, not win rate.
  • Encourage intellectual humility and review test designs with peers or analysts before launch.

“HARKing is generally a bad idea because you don’t actually know why the uplift happened. You can venture a guess; I can explain yesterday’s horrible weather after the fact for many reasons. But if I were able to predict the weather in advance, it shows I understand something about what brings that weather about.

In experimentation, it’s better to know why a certain thing happens than to guess. I know users on my website took action X because of reason Y. To stop HARKing from happening, all hypotheses should include details about the research done to arrive at the hypothesis. That research should be documented and shared openly. Experimentation dies if there is no documentation.”

Brandon Janse van Vuuren, Experimentation and Web Analytics Lead at Nawiri Group
