The Art & Science of Prioritizing Optimization Ideas

Companies embark on the conversion rate optimization (CRO) journey by identifying a solution vendor. Once that's done, they brainstorm how to operationalize the program. Regardless of company size, optimization programs follow a similar, cyclic path from ideation to strategy to execution and back. While the testing program is in its infancy, getting ideas in and driving them to execution is not that complicated. However, there is a blessing and a curse I've observed with optimization: in the beginning, executives or stakeholders can be skeptical about the value such investments add, but testing usually puts those doubts to rest, opening the floodgates for a deluge of ideas waiting to be tested! Add to this that you're probably sitting on site and customer-journey data that has been pointing to site challenges which need immediate attention.

As your optimization program grows, and with it the demand for testing, you almost assume the role of a conductor in an opera. A lot of coordination is needed to balance the data-driven opportunities you've identified against company priorities and stakeholder needs; that's the "art" part of it. So, how do you go about prioritizing multiple ideas?

To know where to start, you should have all the data points in front of you to determine what level of effort will go into a test project. A typical testing process follows the steps below (blocks in grey apply to larger organizations, while those in orange scale well for their smaller counterparts):

[Figure: typical testing process flow]

Keeping this process in mind, here are some of the questions you will need to answer to get a better sense of how to prioritize multiple test ideas:

  1. How does the idea align with our company, business unit and yearly goals (or the financial period you manage your business with)?
  2. What's the level of effort (LOE) and complexity that these tests present? This question has several parts. To gauge the true level of complexity, first consider the design resources required: the more user-experience changes to be incorporated, the more design time you'll need. Next, how hard would it be to develop the test? This calls for collaboration with your IT team to understand the complexity of your test ideas; some may be very simple to develop, while others may require a couple of weeks of development time. Also consider the QA and implementation plan. Together, these give you multiple data points to lay on the table before making decisions.
  3. What's the potential revenue impact (PRI)? Let's say you now have a list of tests with clearly defined LOEs, and ten ideas on that list share the same LOE. What's the next level of data needed to prioritize further? It's time to consider the potential revenue and/or user-experience impact. You may have a high-complexity test, say, on the cart-and-checkout funnel of an ecommerce site. It may be a risky area to test and may require a lot of development time; on the flip side, a site visitor who completes the cart-and-checkout experience has a direct revenue impact. Weighing LOE against PRI will get you the most optimized list for your CRO program, as the sketch after this list illustrates.

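To make the LOE-versus-PRI trade-off concrete, here is a minimal sketch in Python. The 1-to-5 scales, the sample ideas, and the scoring formula (impact times goal alignment, divided by effort) are illustrative assumptions of mine, not a formula prescribed above; tune the inputs and weights to your own program.

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    goal_alignment: int  # 1-5: fit with company/BU/yearly goals (question 1)
    loe: int             # 1-5: combined design, development, and QA effort (question 2)
    pri: int             # 1-5: potential revenue / UX impact (question 3)

def priority_score(idea: TestIdea) -> float:
    # Illustrative formula: reward impact and goal fit, penalize effort.
    return (idea.pri * idea.goal_alignment) / idea.loe

# Hypothetical ideas and ratings, for illustration only.
ideas = [
    TestIdea("Homepage hero copy", goal_alignment=3, loe=1, pri=2),
    TestIdea("Cart & checkout redesign", goal_alignment=5, loe=5, pri=5),
    TestIdea("Product page social proof", goal_alignment=4, loe=2, pri=4),
]

for idea in sorted(ideas, key=priority_score, reverse=True):
    print(f"{priority_score(idea):5.1f}  {idea.name}")
```

Sorting by a score like this surfaces the low-effort, high-impact ideas first, which is exactly the conversation you want to start with stakeholders.
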
So, how does all of this work in real life? What kind of prioritization approaches do other companies follow? I reached out to a few CRO experts to get their perspective on prioritization:

Keith Swiderski, Director of eCommerce Marketing at Avis, shared that they currently run about fifty tests a year, but the quality of tests trumps the quantity. To get to the highest-quality tests, they prioritize ideas on a weekly basis, considering ease of implementation along with potential future revenue. He added: "Occasionally a test will be more difficult to implement than expected, in which case, unless the expected revenue is substantial we might de-prioritize it."

Scott Olivares, of LinkedIn, drives business growth through effective use of CRO programs mature enough to host thousands of CRO projects, including testing of products, features, assumptions, and targeting. For them, too, quality comes first, especially because they may not always have enough resources to test every idea; as a result, exploratory research and analysis to find meaningful opportunities is a significantly time-intensive exercise. They tend to prioritize experiments where they project a big upside with relatively low LOE (little to no engineering required). They also prioritize strategic tests that may mean a shift in their operating procedures; these often require engineering. After that, they prioritize big opportunities that carry a big LOE and always require engineering. They let go of tests with a small projected opportunity, so the time can be judiciously allocated elsewhere. Quoting Scott: "We are constantly looking at our testing priorities. It's all based on opportunity vs. LOE."
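
As a rough illustration of the opportunity-versus-LOE triage Scott describes, the sketch below buckets a test idea into tiers. The tier labels, the "big"/"small" inputs, and the ordering of the checks are my own assumptions for illustration; they are not LinkedIn's actual process.

```python
def triage(opportunity: str, loe: str, strategic: bool = False) -> str:
    """Bucket a test idea by projected opportunity vs. LOE (illustrative)."""
    if opportunity == "small":
        return "let go: reallocate the time elsewhere"
    if loe == "low":
        return "tier 1: big upside, little to no engineering"
    if strategic:
        return "tier 2: strategic shift in operating procedure, engineering required"
    return "tier 3: big opportunity, big LOE, engineering required"

print(triage("big", "low"))                    # tier 1
print(triage("big", "high", strategic=True))   # tier 2
print(triage("big", "high"))                   # tier 3
print(triage("small", "low"))                  # let go
```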

At Dun & Bradstreet, Merritt Aho, Director of Testing & Optimization, does a great job of balancing site needs and business goals in a carefully planned test roadmap. While his team sets long-term goals six months at a time, they go back to the drawing board every two weeks to re-evaluate whether there have been any major changes to business priorities. He also makes a great point about the "shelf life" of tests: if you placed a test idea on the roadmap six months ago and, due to changing priorities, it fell to the bottom of the list, it would be wise to re-evaluate the concept and determine whether it still deserves the design and development time that will be put against it.
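
Merritt's shelf-life point lends itself to a simple automated check. The sketch below flags roadmap items older than a six-month horizon; the 180-day threshold, the field layout, and the sample data are assumptions for illustration.

```python
from datetime import date, timedelta

SHELF_LIFE = timedelta(days=180)  # assumption: roughly the six-month horizon above

# Sample roadmap: (idea, date it was added). Both values are hypothetical.
roadmap = [
    ("Cart & checkout redesign", date(2024, 1, 10)),
    ("Product page social proof", date(2024, 6, 2)),
]

today = date(2024, 7, 15)  # pinned so the example is reproducible
for idea, added in roadmap:
    if today - added > SHELF_LIFE:
        print(f"Re-evaluate before spending design/dev time: {idea} (added {added})")
```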

You just read what some of the leading subject matter experts in CRO think about prioritization strategy. I hope this article shines a light on prioritizing test ideas and how best to approach it. It can be a challenge but can be managed once you have these fundamentals down. I’d be happy to brainstorm with you on how best to define your company’s prioritization process.

Nazli Yuzak, Director of Site Strategy & Optimization, iQuanti