I will add that a couple of great things to include in a dashboard for a MAB are:
1. Pairwise variant comparisons: i.e., P(A>B) for each pair. http://www.evanmiller.org/bayesian-ab-testing.html is an amazing resource on that.
2. Time-to-Confidence estimates: how long until we will have a credible interval of 5%? 1%? 0.5%?
These can help you cull variants and experiments. Sure, the bandit optimizes so you don't have to for the sake of the test, but pruning dead variants keeps the code clean.
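As a rough sketch of the pairwise comparison idea (assuming Bernoulli conversions and a uniform Beta(1,1) prior per variant; `prob_a_beats_b` is a made-up name, not from any library), P(A>B) can be estimated by sampling from each variant's posterior:

```python
import random

def prob_a_beats_b(a_successes, a_trials, b_successes, b_trials,
                   samples=100_000, seed=42):
    """Monte Carlo estimate of P(A > B) for two conversion rates.

    Each variant's posterior is Beta(successes + 1, failures + 1),
    i.e. a uniform prior updated with the observed data.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(samples):
        a = rng.betavariate(a_successes + 1, a_trials - a_successes + 1)
        b = rng.betavariate(b_successes + 1, b_trials - b_successes + 1)
        if a > b:
            wins += 1
    return wins / samples
```

Evan Miller's page linked above derives a closed-form expression for the same quantity; the Monte Carlo version is just easier to extend to more than two variants or non-conjugate models.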