## Why is Ascend changing to Bayesian statistics?

Ascend's evolutionary approach to AI provides more information about an optimization experiment than standard A/B testing techniques do. For example, a very large set of candidate variations (tens of thousands to possibly over one million) may be under test, giving Ascend additional insight into the range of site visitor behavior. This includes the range of conversion rates or AOV that are likely, and the long-tail values. This allows Ascend to process the raw observed data into a more informed and accurate set of information to act upon.

Please see our short videos that explain our Bayesian Reporting in the Ascend dashboard.

## The Metrics

#### Probability to Beat Control

This metric describes the chances that a given candidate's true conversion rate is higher than control's. This does not indicate how much better the performance is, just the chance that the candidate **is** better.

It is a great metric for directly comparing candidates and selecting the best one. It helps evaluate the risk of replacing control with the candidate, directly stating the chances that the change will benefit the site. For this reason, the reporting view is sorted by probability to beat control by default, and the best candidate is chosen with this metric.
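As a rough illustration of the idea (not Ascend's actual model, which is richer), the probability that one true conversion rate beats another can be estimated by sampling from Beta posteriors for each variation. The prior, sample sizes, and function name below are all assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_to_beat_control(cand_conv, cand_vis, ctrl_conv, ctrl_vis, n=100_000):
    """Monte Carlo estimate of P(candidate's true rate > control's true rate).

    Assumes a flat Beta(1, 1) prior, so each posterior is
    Beta(1 + conversions, 1 + non-conversions).
    """
    cand = rng.beta(1 + cand_conv, 1 + cand_vis - cand_conv, n)
    ctrl = rng.beta(1 + ctrl_conv, 1 + ctrl_vis - ctrl_conv, n)
    return (cand > ctrl).mean()

# Illustrative numbers: 12% vs 10% observed over 1,000 visitors each.
p = prob_to_beat_control(120, 1000, 100, 1000)
```

Note that the result is a probability of *being* better, not a measure of *how much* better, which is exactly the distinction the metric above draws.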

#### Expected Range of Performance

This metric tells you, for your selected confidence interval, where Ascend believes the candidate's true conversion rate will lie.

To compute this, Ascend takes into account the expected behavior of the page (to avoid being overly optimistic or pessimistic based on potential outlier behavior) and the relative age of the visitors assigned to the candidate, since a visitor who has had longer to convert is more likely to have done so.

This metric conveys what kind of value can be expected by implementing a candidate after test completion.
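A simplified version of such a range is a Bayesian credible interval on the conversion rate. The sketch below uses a flat Beta(1, 1) prior and Monte Carlo percentiles; Ascend's real computation additionally folds in page-level expectations and visitor age, as described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_range(conversions, visitors, confidence=0.95, n=100_000):
    """Credible interval for the true conversion rate under a Beta(1, 1) prior.

    Returns (low, high) bounds containing `confidence` of the posterior mass.
    """
    draws = rng.beta(1 + conversions, 1 + visitors - conversions, n)
    lo_pct = (1 - confidence) / 2 * 100
    hi_pct = (1 + confidence) / 2 * 100
    lo, hi = np.percentile(draws, [lo_pct, hi_pct])
    return lo, hi

# Illustrative numbers: 100 conversions over 1,000 visitors.
lo, hi = expected_range(100, 1000)
```

As the candidate accumulates visitors, the interval narrows around the true rate, which is why the expected range becomes more decisive late in a test.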

#### Actual Observed Value

This value can still be found in individual candidate graphs. Although it is tempting to pick the clear "winner" based solely on this value, doing so would ignore the benefits afforded by Ascend's Bayesian statistical model. This metric, considered by itself, lacks the additional information Ascend has applied to create the metrics described above.

Comparing the observed value to the expected range may have one of three different outcomes:

*1. Observed performance is **above** the expected range:*

The current performance of the candidate is very good, but this is potentially due to luck or unusual circumstances, so the expected range is declared lower than the observed performance. As the candidate receives more visitors, if its actual performance remains higher than control's, the expected range values will increase. If the candidate was in fact just lucky, and further traffic is more in line with control, the observed value will drop into the expected range. In either case, with more traffic, the observed value will end up within the expected range.

*2. Observed performance is **below** the expected range:*

The current performance of the candidate is poor, but this is potentially due to bad luck, so the expected range is declared higher than the observed performance. As the candidate receives more visitors, if its performance remains lower than control's, the expected range will move down. If the candidate was just unlucky, and further traffic is more in line with control, the observed value will move up into the expected range. In either case, with more traffic, the observed value will end up within the expected range.

*3. Observed performance is **within** the expected range:*

This can mean one of two things:

- The candidate's performance is roughly that of the experiment's average.
- The candidate has seen enough visitors converting at a higher or lower rate than usual to outweigh the prior expectation of the conversion-rate range.
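The three outcomes above amount to comparing the observed value against the interval endpoints. A hypothetical helper (not part of Ascend's reporting) makes the classification explicit:

```python
def compare_observed(observed, lo, hi):
    """Classify an observed rate against an expected range (lo, hi)."""
    if observed > hi:
        return "above"    # outcome 1: likely luck; range will catch up or value will fall
    if observed < lo:
        return "below"    # outcome 2: likely bad luck; range will drop or value will rise
    return "within"       # outcome 3: performance consistent with the model's expectation

# Example: 15% observed against an expected range of 8%-12%.
label = compare_observed(0.15, 0.08, 0.12)
```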

For an overview of how Ascend uses Bayesian statistics, please visit this link.