Collecting data and actually using that data to inform your decision-making aren’t one and the same.
- If you’re not utilizing all the data you’ve collected to drive decision-making, you aren’t getting full value.
- Co-workers from other departments can share unique views at your meetings. This is one way to make sure all teams are aiming for the same goal.
- Leveraging data within a testing environment increases innovation.
Recently, my team at Pantheon took the opportunity to dive into the user experience on our pricing page. Once we decided on a tab-based layout, we started running tests. The initial results were exciting, but the new layout did not drive the engagement we needed to meet our goals.
Instead of trying a different design, a member of the experimentation team suggested adding a data layer to personalize which content within the tab-based design was featured first. By personalizing the content, we could better optimize for the engagement we wanted to see around the different plans.
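As a rough sketch of how that kind of data-layer personalization can work (the attribute names, plan names, and rules below are illustrative assumptions, not Pantheon's actual implementation):

```javascript
// Illustrative sketch: a data-layer object describing the visitor feeds a
// small rule that decides which pricing tab to feature first.
// All attribute and plan names here are hypothetical.

// Simulated data layer, as a tag manager might populate it.
const dataLayer = {
  visitorSegment: "agency",               // e.g. derived from firmographic data
  referrerCampaign: "enterprise-webinar", // e.g. parsed from UTM parameters
  returningVisitor: true,
};

// Decide which pricing tab to surface first for this visitor.
function featuredPlan(layer) {
  if (layer.referrerCampaign && layer.referrerCampaign.startsWith("enterprise")) {
    return "enterprise";
  }
  if (layer.visitorSegment === "agency") {
    return "agency";
  }
  return "starter"; // default ordering for unknown visitors
}

console.log(featuredPlan(dataLayer)); // "enterprise"
```

The design choice worth noting: the test variant itself stays the same, and only the ordering logic consumes the data layer, so new personalization rules can be tested without touching the layout.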
We saw a 5.5% increase in clicks on the pricing plan calls to action and a 23% increase in registration form submissions (our primary key performance indicator for the page). Without creative and analytical insight coming together, our test would not have been as effective. In other words, we were leveraging agile philosophies to make more data-driven decisions.
Informed decisions begin with data
It is easy to say that you want to make data-driven decisions. You can build a data warehouse and admire all the data you’re collecting. But if you aren’t using that data to inform decisions and challenge preconceived notions about what drives value for your team and goals, it is all for naught. A Forrester report said that 60% to 73% of a company’s data does not get used for analytics — meaning that the organization is not using the information to its fullest advantage.
If we had changed only the design of the layout again because we didn’t see the results we wanted, we would be even further away from our goals. Instead, based on these results, we are about to launch a new iteration of this test with additional criteria applied to the personalization strategy.
If you’re ready to reap the benefits of data-driven decision-making, you need to open your eyes to the advantages of agile collaboration — bringing co-workers from different departments together to drive solutions. Below are strategies I have used individually and within my teams to increase the velocity and impact of testing — not to mention develop disruptive and successful products and services — by diversifying the agile process.
1. Collect fresh perspectives
To prevent creative stagnation in the marketing department, we decided to build a culture of agile testing by inviting people who serve different functions in the company to our testing meetings. I saw how co-workers from other teams, such as design and revenue operations, could provide value through their unique points of view.
Too often, departments focus so hard on their own goals and responsibilities that collaboration breaks down — or worse, trust erodes. Siloed groups work toward competing priorities. According to an Institute for Corporate Productivity study of over 1,100 companies, two-thirds identified collaboration as an organizational value, but only those that purposefully pursued collaboration reached their business goals.
Recently, we brought a leader in our sales organization into an agile testing meeting. Hearing his priorities for growth emphasized how and why his department’s key metrics differed from our own. As a result, we added testing concepts to our task list to further refine our experimentation program.
Regularly pull in co-workers from outside your team to see what concepts they can bring to the table. This could be the secret ingredient that kickstarts a new program or revives a lagging idea. Working with the same group of people eventually yields diminishing returns because you collectively start to lean on the same instincts and biases. Mix things up to keep your program fresh and innovative.
2. Define standards for measuring impact and excellence
I recently attended an Optimizely masterclass focused on establishing a culture of experimentation, and the instructors highlighted the value of creating a team mission statement complete with well-defined, measurable goals. Not only does this help everyone stay on the same page, but it also enables you to avoid wasting time on irrelevant conversations that fall outside of your program’s scope.
Monitor things such as the velocity of your testing, the rate at which you reach statistical significance, and the win rate of your tests. These metrics can be tied to your company’s North Star metrics to show your colleagues the impact of your work.
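These program health metrics are straightforward to compute from a log of completed tests. A minimal sketch, assuming a hypothetical test-record shape (the field names and sample data are illustrative):

```javascript
// Illustrative sketch: compute testing velocity, significance rate, and win
// rate from a hypothetical log of completed experiments.
const completedTests = [
  { name: "tab-layout",      daysToRun: 21, reachedSignificance: true,  won: false },
  { name: "personalization", daysToRun: 14, reachedSignificance: true,  won: true },
  { name: "cta-copy",        daysToRun: 30, reachedSignificance: false, won: false },
];

function programMetrics(tests) {
  const significant = tests.filter((t) => t.reachedSignificance);
  return {
    // Velocity: average days each test needed to run.
    avgDaysPerTest: tests.reduce((sum, t) => sum + t.daysToRun, 0) / tests.length,
    // Share of tests that reached statistical significance.
    significanceRate: significant.length / tests.length,
    // Win rate among the tests that reached significance.
    winRate: significant.length
      ? significant.filter((t) => t.won).length / significant.length
      : 0,
  };
}

console.log(programMetrics(completedTests));
```

Reporting these three numbers quarterly, alongside the company's North Star metrics, is one way to make the program's trajectory visible to colleagues outside the team.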
For example, you can set goals around a website’s performance metrics, such as speed, the time it takes for the page’s content to appear on the screen, accessibility ratings, or search engine optimization. We hold quarterly retrospectives to analyze our success metrics and highlight new testing ideas that we believe will have an outsized impact on the next quarter.
Leading our testing program while managing our design and web development team members requires me to be decisive. I often have to step back and prioritize the needs of others over my own testing programs because hard deadlines on the production end must be met.
Furthermore, I am constantly evaluating testing ideas for their business value and feasibility. This can be politically challenging when you have to say “no” to someone. Having a mission statement makes it easier to explain why some ideas might not warrant continued iteration.
3. Set agile and changeable goals
When you begin to see the world through an agile lens, you realize just how wise it can be to move the goalposts from time to time. Because markets are constantly fluctuating, agile marketers re-test theories and question data results. Re-validating your working theories yields new insights that help you reach the best result faster.
Evaluating customer satisfaction alongside conversion rates is only one example of how using quantitative and qualitative research in your testing discussions allows all parties to make stronger decisions.
Throughout my career, I have gained lots of insight by attending sales calls and talking with the sales team to gather feedback on how our marketing programs are performing from a customer perspective. Honest conversations with loyal customers can pinpoint flaws in our marketing programs.
For example, they might highlight how a short-term win with new customers may have had a negative impact on longstanding customers — risking churn from high-value accounts. I have also seen instances when a campaign generated a high volume of leads that didn't convert to paying customers. If we had optimized campaign spend based only on the lead form's conversion metric, we would have wasted resources on campaigns that did not drive business growth.
Leveraging data within a testing environment increases innovation because it has a cascading effect. You test a hypothesis with a minimum viable product. If it works, you iterate on that experiment and get better results. If it fails, you still learn. With each test, you can uncover new hypotheses that will fuel creativity and action for your company.
Collecting data and actually using that data to inform your decision-making are two different things. If you apply these three strategies, you can diversify the agile process to boost the velocity and impact of your testing — and create successful, disruptive products and services along the way.