Point/Counterpoint: Exploring different experiment prioritization frameworks with Marion Ravel, Patrick Buffum and Trisha Utomi - Experiment Nation


Point/Counterpoint: Episode 1


By Marion Ravel, Patrick Buffum and Trisha Utomi


The following is an auto-generated transcript of the podcast by https://otter.ai with very light manual editing. It’s mostly correct but listening to the actual podcast would be wildly more understandable.


Rommil Santiago 0:01
From Experiment Nation, I’m Rommil. And this is Point/Counterpoint, where veteran CROs Patrick and Marion explore different angles of all the hot topics of experimentation. Listen as Trisha guides the discussion.

Trisha Utomi 0:21
Hello, everyone. So welcome to the Point/Counterpoint show. And we are going to be talking today about prioritization frameworks. But first, I’m just going to introduce our guests for today’s show. So we have Patrick Buffum.

Patrick Buffum 0:37
Hey, sure. My name is Patrick Buffum. I’ve been doing A/B testing for probably three years now, full time; it’s most of my role. Currently, I work for an online bank called Ally Financial. They’re pretty big, they have about 10,000 employees, and we have a pretty mature testing program. I think that’s about it.

Trisha Utomi 1:04
And then we’ve got Marion Ravel.

Marion Ravel 1:06
Hi all, I’m Marion Ravel. Excuse my terrible French accent; it didn’t use to be as strong as this. I used to live abroad, but I came back five years ago and it’s getting worse and worse. So let me know if you can understand me, but I’ll try my best. I’ve been working in CRO for four years. I was working for a wine and spirits e-commerce website, and now I’m working for a SaaS company, an accounting software called Sage, which is an English one. So very different industries. The new one is less glamorous, but very interesting. And yeah, that’s about it.

Trisha Utomi
Welcome, Patrick and Marion. So glad to have you on the show. And Patrick, maybe we can go with you first. It’s a new year. Can we learn a little bit about your New Year’s resolutions? Sorry, that’s a tongue twister there.

Patrick Buffum 2:16
Hey, Trish, thanks for having me. I don’t think I have a lot of problems, necessarily, but I do like continuous improvement. Maybe that’s why I’m in A/B testing. Anyways, one big resolution I have is to make testing history more accessible and easier to retrieve info from. Right now, a lot of tests live in my head, my team’s heads, and the heads of people I work with. But across the organization, it would be beneficial to know: if someone just wanted to see all the tests that ran on a given page over the past year, is that available? How easy is it for them to access? That way people aren’t repeating tests, which would be the worst possible thing, and hopefully it’ll give them ideas moving forward. I will say that maybe this is more of a nice-to-have than a necessary thing, because unless you have that problem with people repeating tests, it depends on how your systems are set up, and I could see it getting kind of complicated to tie everything together. But it would definitely be nice to have. Another resolution I have is to make it easier for people to self-serve when it comes to where the testing opportunities are. A lot of people might have things that are very important to them, but not so important to the organization or where we’re trying to go. But if I can make a tool or some kind of repository where people can search the section they’re interested in and determine the traffic to that space, i.e. the traffic volume as well as the clicks and conversions, that would really help them choose tests that are impactful. Because, again, I’m hoping for big changes, but even if the change isn’t big, at least it would be on a page with high traffic. So we would be eliminating a lot of those test ideas that might be bad, in the sense that they wouldn’t have a lot of volume or impact.
But I also wouldn’t have to tell them that when they got to me. I wouldn’t have to have that uncomfortable conversation where it’s like, oh, well, that idea is cool, but there’s just not enough traffic here. They would figure that out for themselves, and I wouldn’t look like the bad guy. So it’d be great.

Trisha Utomi 4:46
Yeah, well, thank you for sharing. There are a lot of different initiatives and goals you have planned that are definitely going to make a big impact. Marion, what are your goals and resolutions this year?

Marion Ravel 5:06
Well, actually, I can see a lot of similarities between Patrick’s New Year’s resolutions and mine. And I think one of them, which is really close, is reducing the number of neutral tests. Patrick was speaking about basing A/B tests on data such as sessions and users, page visits, etc. And it’s the same here. I read recently, on one of the A/B testing tool websites, that 80% of A/B tests were neutral, meaning they didn’t show any improvement or the opposite. So basically the A/B test brought nothing to the company, which means it made the company waste time. It has to happen, it’s normal to have neutral tests, but trying to reduce that number is very important. To reduce it, you have to base your A/B test hypotheses on either current data, or UX and UI information you collected on your site, or the lift method, which is also very interesting. So current data can be session recordings: when you see some users going top to bottom on the page, scrolling up and down, you realize that people at the bottom of the page are looking for information that sits at the top. So you just bring that information below the fold as well, and your solution is right there. That’s an A/B test that has been based on information you collected from your site, and those are the ones that are less likely to be neutral. So I’m going to try to improve that, and I think that’s also what Patrick was saying. Another New Year’s resolution is trying to keep one goal, one objective, per page. It sounds simple: a user comes to this page, what is the one thing I want them to do to go to the next step? When you think like that, you’re asking your customer to do only one single action. And sometimes it’s very easy, because you want the customer to pick the category they’re looking for, or pick the item they’re looking for, or whatever.
But I’m working for a SaaS company, and we are selling accounting software. We are basically offering the customer the choice to try the product for 30 days or to buy it. And on most of my pages, when it’s not personalized, I ask the customer to choose between try and buy, which is a big dilemma for the customer and a huge leak in our customer funnel. But at the same time, today we cannot decide whether we want to push our customers to give us a try or to buy directly. So yeah, just trying to have one goal per page. Hopefully we’ll have a more productive year with all these resolutions.


Trisha Utomi 9:28
Sounds like you will. I definitely heard a lot about refining the customer journey and streamlining processes. I guess that kind of segues into another topic that people are pretty interested in, which is experiment prioritization. Can you tell me a little bit about that, either Patrick or Marion?

Marion Ravel 9:54
Patrick, I think, well, I need to understand the size of your team, because on my team it’s only two people, me and a developer. But I think your team is, like, huge. So it’s going to be interesting to compare our prioritization methods.

Patrick Buffum 10:17
I would say our team is about eight to 10 people.

Marion Ravel 10:23
Okay. Yeah. Okay, interesting. Well, the method I’m using is PIE. I don’t know if it’s the best method, but I don’t want to spend a lot of time just ranking A/B tests, because once it’s all started, we’re in the daily rush, and there’s always something urgent coming up, something we need to test as a priority. So I’m using the PIE method, which stands for Potential, Impact and Ease. It’s basically an Excel sheet, a table starting with Potential, which is the potential of the page. It says how important the page I’m going to A/B test is. If it’s a home page or a product page, it’s obviously more important than, I don’t know, an FAQ page. Well, actually, an FAQ page can be very important too, or a transition page, or a contact page, or whatever. So the potential of the page is based on the user sessions and conversion rates, and the value of the page, obviously. Impact, the I in my PIE, is the impact of the test, meaning how impactful the test will be should it win. That’s basically forecasting how useful the test will be once we put it in production. And Ease is how easy it is to create the test. That depends on how technically difficult it is: do we need designers, or can we just put it together in a few minutes? All three columns are scored from zero to 10, and then we just rank the ideas with a formula to know which one to prioritize. The issue I find with this method is that it doesn’t measure the confidence I have that the test will succeed. I didn’t want to add a fourth column, but it doesn’t say how confident I am in the improvement the test will have on my website, which I think is problematic.
It also doesn’t take into account office politics. It’s true that when you are in an organization, there is always something coming up that is more important than what you’re doing, just because it came out, and it usually lands on top of your list. So you have to learn how to say no, but you also sometimes have to change your prioritization, and that is not taken into account with my method. Another con is that the score given to these variables, between zero and 10, will differ depending on the person. Sometimes my boss or my developer doesn’t agree with the rating I gave, and it’s not a very precise science, so I have to admit it can differ. But I’m curious about the method you’re using, Patrick.
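Marion’s PIE scoring, as she describes it, can be sketched in a few lines. This is a minimal illustration, not her actual spreadsheet: the ranking formula here is a simple average of the three 0–10 ratings, and the example ideas are made up.

```python
# PIE prioritization sketch: each test idea gets 0-10 ratings for
# Potential, Impact, and Ease; here the PIE score is their average,
# and ideas are ranked highest-score-first.
def pie_score(potential: float, impact: float, ease: float) -> float:
    """Average of the three 0-10 ratings (higher = run sooner)."""
    return (potential + impact + ease) / 3

# Hypothetical backlog of test ideas with hand-assigned ratings.
ideas = [
    {"name": "Move key info above the fold", "potential": 8, "impact": 7, "ease": 6},
    {"name": "Reword FAQ link",              "potential": 3, "impact": 2, "ease": 9},
    {"name": "Simplify try-vs-buy choice",   "potential": 9, "impact": 8, "ease": 3},
]

ranked = sorted(
    ideas,
    key=lambda i: pie_score(i["potential"], i["impact"], i["ease"]),
    reverse=True,
)
for idea in ranked:
    score = pie_score(idea["potential"], idea["impact"], idea["ease"])
    print(f"{score:.1f}  {idea['name']}")
```

Note that this bakes in the subjectivity Marion mentions: two raters can legitimately disagree on any of the three numbers, which is exactly the weakness Patrick picks up on next.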


Patrick Buffum 14:45
Yeah. Well, I really like the PIE method too, but what you said at the very end there is why I don’t use it, that part about subjectivity. Like, what’s a 10 to you might be a seven to someone else, and things like that. And so I found CXL’s prioritization framework, which is basically the PIE framework, just a little bit different, which I’ll go into. I find it to be the best framework I’ve run into. So, an example: in PIE, we’re talking about potential, which would be, you know, is this a high-impact change? One question in the PXL framework, as it’s called, would be: is it above the fold? That’s a zero-or-one type question, so there’s no gray area there. And there are a few of those. Basically, it’s like the PIE framework, but within each of those letters are a few specific, kind of binary questions. In some cases they’re not binary, but they’re just 0, 1, or 2, so not a lot of ambiguity. And I know it’s no fun to add columns to really anything, but I think in this case it’s pretty good. And you can also customize it to fit your company, which I really like. So, for example, if you visit the CXL site, they have a cool template for it. Let’s say one of the things on the template is ease of implementation, and it says in parentheses: if it’s less than four hours, give it a three; if it’s up to eight hours, give it a two; if it’s under two days, give it a one, that kind of thing. But depending on the size of your organization, you might have a totally different scale of things, and you can just change it to measure what you do. It’s basically the same idea, but with a lot less ambiguity.
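Patrick’s description of PXL, a set of mostly binary (0/1) questions plus a few small 0–2 or 0–3 scales summed into one score, could be sketched like this. The question names and point ranges below are paraphrased from his examples; a real scorecard would use your own columns and ranges, as he suggests.

```python
# PXL-style scorecard sketch. Each question has an allowed (min, max)
# point range; most are binary, a couple are small scales.
PXL_QUESTIONS = {
    "above_the_fold":         (0, 1),  # is the change above the fold?
    "adds_or_removes_element": (0, 1),  # adds/removes an element vs. tweaking one
    "high_traffic_page":      (0, 1),
    "supported_by_research":  (0, 2),  # e.g. heat maps, eye tracking, analytics
    "ease_of_implementation": (0, 3),  # 3 = under 4h, 2 = up to 8h, 1 = under 2 days
}

def pxl_score(answers: dict) -> int:
    """Sum the answered points; unanswered questions count as zero."""
    total = 0
    for question, (lo, hi) in PXL_QUESTIONS.items():
        value = answers.get(question, 0)
        if not lo <= value <= hi:
            raise ValueError(f"{question} must be between {lo} and {hi}")
        total += value
    return total
```

Because every column has a fixed, objective range, two raters answering honestly should land on the same total, which is the ambiguity reduction Patrick is after.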

Marion Ravel 17:01
Yeah. And yeah, it’s much better.

Patrick Buffum 17:03
Yeah.

Marion Ravel 17:04
What do you call it?

Patrick Buffum 17:07
It’s called the PXL. So it’s like CXL, the Conversion XL priority framework. So, PXL, I guess that’s how they came up with it.

Marion Ravel 17:18
Okay.

Patrick Buffum 17:19
But actually, when I was preparing for this, I came across another framework that I thought was pretty cool and wanted to bring up. I don’t know if it’s still their framework, but a few years ago at the CXL Live conference, some people from hotwire.com, which is a travel booking service, presented their framework, and I thought it was pretty cool. I’ll make sure we include a link at the end of this. It’s binary, just like the CXL framework, in that there’s no subjectivity and no ambiguity: it’s either yes or no. One thing it includes that I like is a column called new information, which asks: does this add new information, add a new element, or remove an element from the page? If so, you give it a one. But if it’s just a change to an existing element, like changing the color or the copy or the UI, then you don’t get a point for it. That’s its way of rewarding big changes without asking, “Is this a big change?” Because is this a big change to the person making it? They probably think it is; everybody’s going to answer that differently. And I don’t think anybody would come to me with an idea that they didn’t think was big.


Marion Ravel 18:45
Yeah. Too subjective.

Patrick Buffum 18:48
Yeah. They also, Oh, go ahead.

Trisha Utomi 18:51
Oh, no, I was just going to ask: how would you pick a framework for your practice?

Patrick Buffum 18:57
Um, I think it probably just depends on how, like, sophisticated, or I don’t know if I want to use that word, how big your company is. Because on the PXL framework, for example, one of the columns is: is this supported by heat maps or eye tracking? And a lot of websites probably don’t have that, and then the PXL framework might be overkill. Like I said, compared to the PIE framework, it has a lot more columns, so a lot more questions to answer. If you’re a smaller company, the PIE framework could probably get you similar answers as far as how important something is, with a lot fewer questions. So I would recommend that to smaller companies. And then as you get bigger, I would adapt the PXL framework, which is similar to the PIE one, so you have more room to make it more sophisticated and fit what you’re doing a bit.

Trisha Utomi 20:04
Do you have anything to add, Marion?

Marion Ravel 20:08
Yeah, I’m just wondering whether you have your prioritization method on Excel, or you have, like, some sort of software, or…

Patrick Buffum 20:24
Yeah, no, we don’t have any kind of software for that. I don’t even know of software for it. It’s just on Excel: adding up the columns, taking averages, that kind of thing. So nothing fancy there. I was thinking the other day, it would be cool, in an ideal world, if people across the organization could access a tool where they input these ones or zeros for your framework, and then it outputs a very easy-to-understand grade, like A, B, C. And if it’s not good enough, like if it’s a C, it’ll say why, that kind of thing. If it’s an A, it will be automatically sent to the testing team. I thought something like that would be cool. But who has time to make that kind of thing? So hopefully someone else does, so you can buy it.
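Patrick’s imagined tool, where framework scores turn into an A/B/C grade and a low grade explains itself, might look something like this sketch. The thresholds, grade letters, and “reasons” logic are entirely hypothetical; nothing here is Ally’s actual process.

```python
# Hypothetical grading step for a self-serve prioritization tool:
# sum a submitter's answers, map the ratio to A/B/C, and list the
# questions that cost points as the "why" for a lower grade.
def grade_idea(answers: dict, max_per_question: dict) -> tuple[str, list[str]]:
    """Return (grade, reasons). `answers` maps question -> points earned;
    `max_per_question` maps question -> maximum possible points."""
    earned = sum(answers.get(q, 0) for q in max_per_question)
    possible = sum(max_per_question.values())
    ratio = earned / possible
    # Questions scored below their maximum explain a weak grade.
    reasons = [q for q, top in max_per_question.items() if answers.get(q, 0) < top]
    if ratio >= 0.75:
        return "A", []        # would be auto-forwarded to the testing team
    if ratio >= 0.5:
        return "B", reasons
    return "C", reasons       # shown back to the submitter with the reasons
```

The point of returning the shortfall questions is exactly what Patrick describes: the submitter sees for themselves why an idea scored poorly, without an uncomfortable conversation.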

Trisha Utomi 21:23
How do you get stakeholders to agree with the results of a framework?

Patrick Buffum 21:28
Yeah, I’d say for this one, because I’m in a bigger company, so there are a lot of stakeholders, it hasn’t been a big problem. I think that’s just because, when we came up with this framework, we made sure to engage the most important people we could, the ones touching these initiatives, and to get buy-in from them from the beginning. That’s super important. And then sometimes things will come up, like, we have to run this test anyway, and that can give someone like me heartburn, which is kind of a jargony term we use that just means I’m a little annoyed that a test scores poorly and then we run it anyway. But those situations happen, and that’s just how it is in business, so you run those anyway. Other than that, it’s been fine.

Trisha Utomi 22:30
So yeah, thank you, Patrick, and thank you, Marion, for coming onto the show. We talked about your resolutions and how you’re going to be making a bigger impact in your various organizations, and we talked about the PIE and PXL frameworks and prioritization. If you would like to learn more about those topics, we’ll drop the links in the description so you can find more information. Other than that, thank you for joining us, and tune in to the next episode.

(Transcribed by https://otter.ai)
