Started week 9 with new courses in the Digital Analytics Minidegree.
The next course was about A/B testing foundations.
Lesson number one: Introduction to A/B Testing. So what are we trying to do here? Why are we doing A/B testing to begin with, and how does it relate to conversion optimization? Let's start there. Imagine two potential growth curves. The blue line is when company growth is flat: there's no growth whatsoever. The orange line is when we increase our conversion rate 1% per year; that's our revenue growth curve. In the next slide, we have a company that is increasing its conversion rate by 10% per year, and that's what its growth curve looks like. If we were able to increase our conversion rate by 15% a year, which is a modest improvement in my opinion, our growth curve would be that yellow line. So there's a huge difference between a 1% increase in conversion rate and a 15% one. Essentially, optimization is compound interest for growth. That's why we do conversion optimization.

However, A/B testing is not the same as conversion optimization. A/B testing is for validation and learning. We can do conversion optimization without any testing at all, but there will be a lot of eyeballing and guessing about what's working and what's not. With A/B tests, we can validate business impact through measurement. If we ship a change on our website and it's working, we will know exactly how well it's working: it added 10% to our revenue, or increased cart adds by 5%, or whatever it is we're trying to do. So it's for validation and measurement, and we should think about testing as a measurement methodology. It does not prescribe at all what you can or cannot test. Some people say A/B testing is for spammy people, or joke that A/B testing is for porn sites and that all sites that do A/B testing end up looking like porn sites. Well, the companies that run the most A/B tests are Google, Facebook, and so on. A/B testing is just a methodology.
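The "compound interest" point can be made concrete with a few lines of Python. This is just an illustrative sketch: the base revenue of 100 is a made-up index value, and the rates match the 1% / 10% / 15% annual improvements from the slide.

```python
# Compounding effect of annual conversion-rate improvements on revenue.
# The base revenue (100) is an arbitrary index; rates are from the slide.

def revenue_after(years: int, annual_lift: float, base_revenue: float = 100.0) -> float:
    """Revenue index after compounding an annual conversion-rate lift."""
    return base_revenue * (1 + annual_lift) ** years

for lift in (0.01, 0.10, 0.15):
    print(f"{lift:.0%}/year for 10 years: index {revenue_after(10, lift):.1f}")
```

After 10 years, the 1% improver has barely moved while the 15% improver has roughly quadrupled, which is exactly the gap between the orange and yellow curves.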
You might say, "Oh, well, A/B testing is risky. What if B is not better? Then we tested only to lose money." So you think that a safe alternative is to just go with A, because B might be risky. But we live in a world that is always changing, so change is inevitable; it's going to happen whether you like it or not. The environment changes, your competitors change, all that stuff. A/B testing actually makes the changes that are necessary safer for consumers and businesses alike: you can manage your risk. If you just go with safe A, you never find out whether you had something that would have worked better.
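One way to see why testing is the risk-managed path, not the risky one: during a test, a bad variant B only touches a fraction of traffic for a limited window, whereas shipping B blindly exposes all traffic until someone notices. A minimal back-of-the-envelope sketch, with all numbers hypothetical:

```python
# Sketch: worst-case revenue exposure of testing B vs shipping B blindly.
# All inputs here are hypothetical illustration values, not from the course.

def worst_case_loss(daily_revenue: float, drop: float,
                    traffic_share: float, days: int) -> float:
    """Max revenue lost if B underperforms by `drop` over the exposure window."""
    return daily_revenue * drop * traffic_share * days

# Testing: B gets 50% of traffic for a 14-day test.
test_loss = worst_case_loss(10_000, 0.20, 0.5, 14)
# Shipping blindly: B gets 100% of traffic until noticed, say 90 days.
ship_loss = worst_case_loss(10_000, 0.20, 1.0, 90)
print(test_loss, ship_loss)
```

Under these assumptions the test caps the downside at a small fraction of the blind-rollout loss, which is the sense in which A/B testing makes change safer.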
Lesson three, test prioritization. We have identified a problem, and then we have a number of ideas that might address that problem. That's the thing to always remember: even if you understand what the problem is on a given page, like why aren't more people adding products to the cart, you don't know what the right solution is. There could be many right answers; there is no one right answer, necessarily. The point is that we have a lot of ideas for what could work, move the needle, and have the impact we want. And because we don't know, we need to test. If we knew what was going to work, we wouldn't be testing anything to begin with. A/B testing exists because we don't know, so you always have to keep that student, learner, growth mindset. I've been doing A/B testing for more than a decade, and if I had to guess which A/B test hypothesis is going to win, I get it right maybe 60% of the time. That's hardly better than flipping a coin, and it is not good enough. So we need prioritization: we need to think about what to test first, second, third, and so on. In an ideal world we could test all the things at the same time, but we are limited by our website traffic. Most of us have traffic constraints; even Google can't test everything they would like to test, so they need to prioritize as well. It's not just a small-business issue. There are multiple prioritization frameworks out there. One that is quite well known is the PIE framework. Basically, you score each test idea across three dimensions, each on a 10-point scale. First, potential: how likely is this test to win? This number should derive from the user research that you do.
Second, you give each test hypothesis a number for importance: does it run on a high-traffic page, what is the expected ROI, and so on. And finally you rate each idea on ease of implementation: how many hours does it take developers to code it up, does it cost real money, and so on. Each category is worth up to 10 points, and then you add them up and average. Let's say 10, 10, and 8: that's 28, and 28 divided by 3 is 9.3. That is our best guess at what's going to win, so we test that one first. It's a pretty easy model. The problem with this model is the numbers: they are pretty subjective. Yes, you could take all kinds of variables into account, but if I think this implementation is just a 4 and my buddy says no, no, it's a 7, and we don't discuss it, he just pushes his idea and manipulates his numbers, maybe because he's more aggressive and I'm more conservative. That can cause problems; the subjectivity can kill this. And then there's the question about potential: how likely is this to win? How would I know? That's why I'm testing. The next model is pretty much the same. It's called the ICE model: impact, cost, effort. This is old school, the OG of prioritization models; it's from the nineties. Basically, you ask a set of questions about a test hypothesis. Is the impact of this test going to be high or low? If it's low, you get zero points; if it's high, you get two points. What is the cost to build this test? If it's low, you get one point. And what is the effort required in terms of human hours? If it's low, you get one point. So any idea can get a maximum of four points, and if it gets zero points, it shouldn't be tested to begin with.
Four points means we test it right away. Three points is pretty good, two is so-so, and one is, well, I don't know. The problem with this one, again, is the subjectivity of "high" and "low".
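The two scoring models described above are simple enough to sketch in code. The idea names and ratings below are made up for illustration; the PIE arithmetic matches the 10/10/8 example from the lesson, and ICE uses the point values as described (impact high = 2, low cost = 1, low effort = 1).

```python
# Sketch of the PIE and ICE prioritization models as described in the lesson.
# Idea names and ratings are illustrative, not from the course.

def pie_score(potential: int, importance: int, ease: int) -> float:
    """PIE: average of three 1-10 ratings, one decimal place."""
    return round((potential + importance + ease) / 3, 1)

def ice_score(impact_high: bool, cost_low: bool, effort_low: bool) -> int:
    """ICE as described: high impact = 2 pts, low cost = 1 pt, low effort = 1 pt."""
    return (2 if impact_high else 0) + (1 if cost_low else 0) + (1 if effort_low else 0)

ideas = [
    ("Clarify checkout copy", pie_score(10, 10, 8)),  # the 9.3 example from the text
    ("Add social proof",      pie_score(6, 7, 9)),
]
ideas.sort(key=lambda idea: idea[1], reverse=True)  # highest score gets tested first
print(ideas)
```

The code also makes the weakness visible: the rankings are only as good as the subjective numbers you feed in.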
Final lesson, testing strategies: how to choose what kind of test to run, and where to start. My advice is to imagine your site as a pyramid consisting of various types of ideas. You always want to start with the low-hanging fruit. You have an identified issue, a problem you have discovered through user research or some other research methodology, and it's obvious what to do about it. For example: people don't get the copy, so we know we have a clarity problem, and by clarifying our copy and adding more detail we can make more money. You want to start with that kind of test, where there's an obvious problem and a pretty obvious solution. We still don't know which particular solution works best, hence we need to test, but the direction is pretty obvious. Once you've tapped all your low-hanging fruit and there is none left, you can start testing creative ideas and persuasion tactics: "Can we do a scarcity play here? Can we add more social proof there? What if they could log in with Twitter?" Maybe you do some competitive benchmarking: you saw a great idea for some UX pattern somewhere, and you want to test it out. You do that in the second phase. The final phase is innovative testing. Innovative testing means testing big swings, dramatic changes. A complete site redesign would be an example of innovative testing, or completely changing your messaging and positioning, how you present yourself; you totally redesign your homepage, or you build a new checkout experience. That's all innovative testing. Innovative testing carries big risk, meaning that if it fails, it can fail badly.
And I have so many war stories of people losing millions, even hundreds of millions of dollars doing it. But it also has the big upsides: this is where the doubling and tripling of conversions happens, which never happens with an "I changed this small thing here" test. Innovative testing is the only thing that can give you big results. So tread carefully there; it's not where you want to start. A common question about testing is which part of the website to start with. If it's e-commerce, should we start on the homepage, or where? My advice: start closest to the money. On the homepage, there are a lot of people who are just checking the place out, wondering "what is this place?", and they need a lot of motivation. But people who get to your checkout have probably done the hard work of going through your product selection, choosing something they like, and seriously considering buying it, so their motivation is going to be very high. It's much easier to get them over the edge there. So start closer to the money. Also, your checkout page will have the highest conversion rate. So your site average may be 2%.
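There is also a statistical reason to start closest to the money: pages with a higher baseline conversion rate need far fewer visitors to detect the same relative lift. A rough sketch using a common rule of thumb for sample size per variant (n ≈ 16·p(1−p)/δ² for roughly 80% power at 5% significance; this formula and the baseline rates below are my own illustration, not from the course):

```python
# Rule-of-thumb sample size per variant: n ≈ 16 * p(1-p) / delta^2,
# where p is the baseline conversion rate and delta is the absolute lift
# to detect. Baselines below are hypothetical illustration values.

def sample_size(p: float, relative_lift: float) -> int:
    """Approximate visitors needed per variant to detect a relative lift."""
    delta = p * relative_lift
    return round(16 * p * (1 - p) / delta ** 2)

# Detecting a 10% relative lift:
homepage = sample_size(0.02, 0.10)   # ~2% baseline, like a site average
checkout = sample_size(0.40, 0.10)   # much higher baseline at checkout
print(homepage, checkout)
```

Under these assumptions the low-baseline page needs tens of thousands of visitors per variant while the checkout page needs only a few thousand, which is another reason tests "closer to the money" pay off faster.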