Inconclusive Results
What Are Inconclusive Results in A/B Testing?
Inconclusive results happen when your experiment finishes and none of the key metrics reach statistical significance. That means you can’t confidently say whether the change helped, hurt, or did nothing at all.
It’s not the same as proving there’s no effect. It just means the data didn’t provide a clear signal. The test might have been underpowered, or the true effect might be too small to detect with the setup you used.
Ambiguity is normal in experimentation. An inconclusive result doesn’t equal failure—it just means the question remains unanswered.
Why Experiments Produce Inconclusive Results
There are several common reasons:
- Too little traffic or runtime
- Effect size too small for your test to detect
- The change had no real impact
- Poor hypothesis or unclear goal
- QA or implementation issues like tracking bugs or Sample Ratio Mismatch
- External noise during the test period
- Multiple variables changed at once, making attribution hard
- Too many metrics tracked, increasing noise and confusion
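One of the implementation issues above, Sample Ratio Mismatch, can be screened for mechanically before you even look at results. Here is a minimal sketch (the function name, traffic counts, and 50/50 split are illustrative assumptions, not from this article) using a one-degree-of-freedom chi-square test on the observed split:

```python
# Hedged sketch: detecting Sample Ratio Mismatch (SRM) in a two-arm test.
# Counts and the 50/50 expected split below are illustrative assumptions.

def srm_chi_square(observed_a, observed_b, expected_ratio_a=0.5):
    """Chi-square statistic (df = 1) comparing the observed arm counts
    to the split the experiment was configured to deliver."""
    total = observed_a + observed_b
    expected_a = total * expected_ratio_a
    expected_b = total * (1 - expected_ratio_a)
    return ((observed_a - expected_a) ** 2 / expected_a
            + (observed_b - expected_b) ** 2 / expected_b)

# Critical value for df = 1 at alpha = 0.05 is about 3.841.
CRITICAL_005 = 3.841

stat = srm_chi_square(50_341, 49_019)  # illustrative visitor counts
has_srm = stat > CRITICAL_005          # True here: the split is suspicious
```

If `has_srm` is true, the assignment mechanism itself is broken, and no amount of runtime will make the result trustworthy; fix the bucketing bug and restart the test.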
“When a test has no KPIs that have reached confidence, the test is considered inconclusive.
When metrics don’t reach confidence, you don’t know how the test will behave in the wild if you roll it out. It could have a positive effect, it could have a negative effect, or it could do nothing.
At best, this means your customers were ambivalent about the changes; at worst, the changes were too small for customers to notice. When you have inconclusive results, the best thing to do is to review the qualitative data to try and determine why customers reacted the way they did, iterate based on their feedback, and retest. If tests are inconclusive due to low traffic, then you need to expand the test audience and re-run.”
Kathleen Kintz, Sr. Manager, Customer Research and A/B Testing at Tractor Supply Company
What to Do With Inconclusive Test Results
Avoid the urge to overinterpret. Instead:
- Revisit your hypothesis. Was it grounded in data or guesswork?
- Review QA and implementation. Any bugs, mismatched goals, or sample issues?
- Analyze segments. Maybe the change worked better for certain user types (e.g. mobile, returning visitors).
- Check your test’s power. Did you have enough users and time?
- Bring in qualitative data. Use surveys, feedback, or user sessions to see what people actually noticed or cared about.
- Decide whether to re-test or move on. Not all ideas are worth repeating. But some are—if the data shows promise or your setup was flawed.
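The power check in the list above can be done up front rather than after the fact. A rough sketch (the baseline rate, lift, and defaults are illustrative assumptions) of the standard two-proportion sample-size formula, at a two-sided alpha of 0.05 and 80% power:

```python
import math

def sample_size_per_arm(baseline_rate, minimum_detectable_effect,
                        z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per arm to detect an absolute lift of
    `minimum_detectable_effect` over `baseline_rate` (two-sided
    alpha = 0.05, power = 0.80 with the default z-values)."""
    p1 = baseline_rate
    p2 = baseline_rate + minimum_detectable_effect
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / minimum_detectable_effect ** 2)

# Illustrative: 3% baseline conversion, detect a 0.5-point absolute lift.
n = sample_size_per_arm(0.03, 0.005)
```

Comparing `n` against the traffic you actually collected per arm tells you whether an inconclusive result is evidence of no effect or simply of an underpowered test.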
When to Re-Test vs. Move On
Re-test if:
- You were underpowered but see directional movement
- You found segmentation insights worth exploring
- There were known bugs or external disruptions
- You can iterate quickly at low cost
Move on if:
- The test ran cleanly but showed no meaningful effect
- The idea lacks user salience
- You’ve tested similar ideas repeatedly with the same flat outcome
- You have stronger, higher-upside ideas to prioritize
Best Practices to Reduce Inconclusive Tests
- Plan for enough statistical power
- Choose clear KPIs that align with user behavior
- Run QA across devices and segments
- Keep variants simple to isolate effects
- Use confidence intervals—not just p-values—to interpret results
- Track and share learnings from every test, including the inconclusive ones
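The confidence-interval practice above can be sketched briefly. This example (function name and conversion counts are illustrative assumptions) computes a 95% Wald interval for the difference in conversion rates between two arms:

```python
import math

def diff_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """95% Wald confidence interval for the difference in conversion
    rates (variant minus control) of two independent arms."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Illustrative counts: 300/10,000 in control, 318/10,000 in the variant.
low, high = diff_ci(300, 10_000, 318, 10_000)
# The interval spans zero here, so the test is inconclusive; its width
# shows which effect sizes the data can and cannot rule out.
```

This is why intervals beat bare p-values for inconclusive tests: a narrow interval around zero supports moving on, while a wide one that stretches well into positive territory supports re-testing with more traffic.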