When experimentation runs without a North Star (a single, unifying measure of success), it quickly devolves into disconnected tests, unclear priorities, and “optimise-for-the-sake-of-optimising.” The result? Lots of activity, very little impact.
A North Star metric transforms experimentation from a tactical activity into a strategic engine. Without it, even the most sophisticated experimentation platform becomes a scattergun of ideas, rather than a disciplined system for accelerated learning.
Common examples include revenue per user, conversion rate, and weekly active users. These metrics are close to behaviour, close to value, and highly experiment-responsive.
A North Star is only powerful when it shapes how experiments are designed, run, and interpreted. It forces teams to think beyond isolated test outcomes and to consider how a change influences the broader user journey.

For example, if your North Star is revenue per user, your decomposed submetrics might be sessions per user, conversion rate, and average order value, each of which individual experiments can move directly (a simple worked decomposition is sketched below).
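As a rough illustration, not tied to any particular analytics tool, the sketch below shows how such submetrics multiply back up to the North Star. The interface, field names, and figures are hypothetical.

```typescript
// Hypothetical per-period aggregates; not an Optimizely data structure.
interface PeriodStats {
  users: number;    // unique users in the period
  sessions: number; // total sessions
  orders: number;   // completed orders
  revenue: number;  // total revenue, in pounds
}

// Decompose the North Star (revenue per user) into submetrics that
// individual experiments can move directly.
function decompose(stats: PeriodStats) {
  const sessionsPerUser = stats.sessions / stats.users;    // engagement
  const conversionRate = stats.orders / stats.sessions;    // orders per session
  const averageOrderValue = stats.revenue / stats.orders;  // monetisation
  // revenue/user = (sessions/user) * (orders/session) * (revenue/order)
  const revenuePerUser = sessionsPerUser * conversionRate * averageOrderValue;
  return { sessionsPerUser, conversionRate, averageOrderValue, revenuePerUser };
}

// Example: 10,000 users, 25,000 sessions, 1,250 orders, £50,000 revenue
// => 2.5 sessions/user * 5% conversion * £40 AOV = £5 revenue per user
console.log(decompose({ users: 10_000, sessions: 25_000, orders: 1_250, revenue: 50_000 }));
```

An experiment that nudges any one submetric, say a checkout change that lifts conversion rate, can then be judged by its projected effect on the North Star rather than in isolation.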
The difference between good experimentation programmes and great ones is simple:
Good programmes deliver small wins; great programmes deliver organisational learning.
OMVP-level practitioners help organisations evolve from optimisation to directional transformation, using experimentation to validate strategic leaps, not just to polish micro-interactions.
With a well-defined North Star, every experiment ladders up to a shared definition of success, and activity turns into impact.
Optimizely provides the infrastructure for experimentation, analytics, feature rollouts, and behavioural insight, but the North Star provides the purpose.
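To make that concrete, here is a minimal sketch of wiring an experiment decision and a North Star-aligned event through Optimizely Feature Experimentation, assuming the JavaScript/TypeScript SDK (@optimizely/optimizely-sdk). The SDK key, flag key, event key, and user attributes are placeholders, not recommendations.

```typescript
import { createInstance } from '@optimizely/optimizely-sdk';

// Placeholder SDK key, taken from your Optimizely project settings.
const optimizelyClient = createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

optimizelyClient?.onReady().then(() => {
  // Evaluate a hypothetical experiment flag for this user.
  const user = optimizelyClient.createUserContext('user-123', { plan: 'pro' });
  const decision = user?.decide('checkout_redesign');

  if (decision?.enabled) {
    // Render the variation experience here.
  }

  // Track the behaviour that feeds the North Star (revenue per user),
  // so every experiment is read against the same measure of success.
  user?.trackEvent('purchase', { revenue: 4999 }); // revenue tag in cents
});
```

The important design choice is not the API call itself but that the tracked event maps directly onto the North Star, so experimentation, analytics, and feature rollouts all report against the same measure of success.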