A/B testing and the mistakes users make.

During our class last Friday, Mrs. Suja talked about how LinkedIn uses A/B testing to improve its product experience. That got me here, wanting to talk a bit more about it.

With all the talk about how LinkedIn uses it, I wondered how A/B testing is generally done.
In LinkedIn's case, the team started the A/B test by deciding its purpose first (i.e., to show people relevant "People You May Know" suggestions using variables like first-degree and second-degree connections).

Generally, too, you start with a purpose.
Once you have that, you define metrics to measure performance, which can be something like how many people clicked on a particular link. (In LinkedIn's case, it was how many people sent requests to the connections the algorithm suggested, and then how many of those requests were accepted.)
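
To make this concrete, here is a minimal sketch (in Python, with a made-up event log and column names, not LinkedIn's actual data) of how metrics like these could be computed per variant:

```python
import pandas as pd

# Hypothetical event log: one row per suggestion shown to a user.
# Columns: variant (A or B), request_sent (bool), request_accepted (bool).
events = pd.DataFrame({
    "variant": ["A", "A", "A", "B", "B", "B"],
    "request_sent": [True, False, True, True, True, False],
    "request_accepted": [True, False, False, True, False, False],
})

# Primary metric: share of shown suggestions that led to a sent request.
summary = events.groupby("variant").agg(
    suggestions_shown=("request_sent", "size"),
    send_rate=("request_sent", "mean"),
)

# Secondary metric: share of sent requests that were accepted.
summary["accept_rate"] = (
    events[events["request_sent"]]
    .groupby("variant")["request_accepted"]
    .mean()
)
print(summary)
```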

This test can also be considered the most basic randomised controlled experiment!
As with any randomised experiment, you need to estimate the sample size required to reach statistical significance.
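
For example, a rough sample size for comparing two conversion rates can be estimated with the standard normal-approximation formula for a two-proportion test. The baseline rate and the lift in the sketch below are made-up numbers, just to show the mechanics:

```python
from scipy.stats import norm

def sample_size_per_variant(p_baseline, p_variant, alpha=0.05, power=0.8):
    """Approximate n per group for a two-sided two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_beta = norm.ppf(power)            # critical value for the desired power
    p_bar = (p_baseline + p_variant) / 2
    effect = abs(p_variant - p_baseline)
    n = ((z_alpha + z_beta) ** 2) * 2 * p_bar * (1 - p_bar) / effect ** 2
    return int(round(n))

# Assumed numbers: 10% of suggestions currently lead to a request,
# and we want to reliably detect an improvement to 11%.
print(sample_size_per_variant(0.10, 0.11))  # roughly 14-15 thousand users per variant
```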

The basic appeal of A/B tests is that they are simple, and they show results in near real time.
That same simplicity also leads to the mistakes people make when using them.

We will talk specifically about three.
Firstly, too many people (mostly managers) don't let the test run its full course, mostly because humans are impatient.
Stopping early makes it possible that the result you believe is best actually isn't: because of randomisation, the early numbers bounce around, and a test that runs to completion might give you a totally different answer.
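
A quick simulation shows why this "peeking" is risky. The sketch below runs A/A tests (both variants identical, so any "significant" result is by definition a false positive) and stops the moment the p-value dips below 0.05; the exact numbers are illustrative assumptions, but the inflated error rate is the general point:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def z_test_pvalue(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test with a pooled proportion."""
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (successes_a / n_a - successes_b / n_b) / se
    return 2 * (1 - norm.cdf(abs(z)))

def run_aa_test(p=0.1, batch=500, n_peeks=20):
    """Simulate an A/A test, peeking after every batch of users per arm."""
    succ_a = succ_b = n = 0
    for _ in range(n_peeks):
        succ_a += rng.binomial(batch, p)
        succ_b += rng.binomial(batch, p)
        n += batch
        if z_test_pvalue(succ_a, n, succ_b, n) < 0.05:
            return True   # stopped early and declared a (false) winner
    return False

false_positives = sum(run_aa_test() for _ in range(2000))
print(f"False positive rate with peeking: {false_positives / 2000:.1%}")
# Typically well above the nominal 5% with this many peeks.
```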

The second mistake is taking too many variables as metrics, which can lead to spurious correlations. With so many metrics, you cannot focus on what is happening with the variable you actually care about; instead you start noticing all sorts of changes, many of which are statistically insignificant.
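
If you do end up with many metrics, one common hedge is a multiple-comparison correction such as Bonferroni. A minimal sketch, assuming you already have the raw p-values for each metric (the values below are invented):

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from testing 8 different metrics in the same experiment.
p_values = [0.004, 0.030, 0.045, 0.120, 0.260, 0.410, 0.048, 0.700]

# With 8 independent metrics at alpha = 0.05, the chance of at least one
# false positive is 1 - 0.95**8, roughly 34%, before any correction.
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")

for raw, adj, sig in zip(p_values, p_adjusted, reject):
    print(f"raw p={raw:.3f}  adjusted p={adj:.3f}  significant={sig}")
```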

Lastly, enough retesting is not done. There is always some possibility of a false positive, and the more tests you run, the higher the probability that at least one of them is wrong, so surprising results are worth retesting before you act on them.
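
The arithmetic behind this is simple, and it also shows why retesting helps: a spurious result only has to clear the significance bar once, but it has to clear it twice to survive an independent retest. A tiny sketch:

```python
ALPHA = 0.05  # significance level per test

# Run many A/B tests and the chance that at least one "winner" is spurious grows fast.
for n_tests in (1, 10, 20, 50):
    at_least_one_false = 1 - (1 - ALPHA) ** n_tests
    print(f"{n_tests:>2} tests -> P(at least one false positive) = {at_least_one_false:.0%}")

# A retest shrinks the risk for any single result: an effect that isn't real
# has to come up "significant" twice, which happens with probability alpha**2.
print(f"P(a spurious result survives one independent retest) = {ALPHA ** 2:.4f}")
```
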
That said, A/B tests are very easy to run, and it's easy to switch to another tactic if one doesn't work! Is it still used as much as it used to be, though? Or is AI/ML killing it?
Would love to hear what you feel about the same!

Reference: https://hbr.org/2017/06/a-refresher-on-ab-testing


One comment on “A/B testing and the mistakes users make.”

  1. Hey, interesting read!
    As for ML taking over A/B testing, I believe it is happening, but the simplicity of A/B testing and the flexibility it offers still keep it alive.
    What do you think?

