From her time at Capital One and The Motley Fool to her current role at Nationwide, Julia Barham has a history of powering growth and customer engagement across the entire lifecycle of a product.
As the Director of an Innovation Product Team, Julia focuses on finding untapped opportunities for growth at Nationwide, and her team draws on a whole repertoire of research techniques to do it. We spoke with Julia about the methods she uses to fuel more expansive growth, as well as how more traditional CRO programs could benefit from layering in different techniques.
Even in 2022, the focus of digital experimentation is often still limited to the exploitative stages of the lifecycle, namely optimization, whereas your team is uniquely positioned to find new, more expansive opportunities for growth. How do you evaluate and decide which new products and services Nationwide should be in the business of?
We test hundreds of concepts every year to learn about users’ needs and validate (or invalidate) problem-solution fit. We test lightweight concepts to learn if a new feature is attractive and useful to customers, and we also test higher-fidelity concepts when we’re developing a brand new product. All of these tests are designed to help us attract and retain customers by putting them at the center of our work.
How do you actually do concept testing–at an executional level?
Our cross-functional teams are responsible for testing concepts to identify and sharpen solution requirements. Before concept testing, the team has already identified a set of user needs to solve via primary and secondary research. We then use diverse ideation techniques to generate numerous solution hypotheses, and we prioritize the hypotheses that will generate the most critical learnings. From there, we develop the research objectives for our study and design the concepts. Depending on the needs of our study, some concepts are just written statements or low-fidelity panels, while others are high-fidelity mock-ups and clickable prototypes. Then, we recruit users and test the concepts with the rigor of an in-market experiment, gathering quantitative and qualitative feedback. (Pro tip: Always try to collect both qual and quant feedback so you can determine what worked, what didn’t work, and why.)
Prior to joining Nationwide, you held roles with an emphasis on A/B testing. How would your approach to optimization differ if you had access to all the different experimentation methods and tools that you’re using at Nationwide?
I would encourage all A/B testers to expand their research toolkit and leverage concept testing to think holistically about iterating their products. One change at a time isn’t always the cheapest or fastest way to make a big impact with users. Using diverse research techniques beyond A/B testing (such as choice-based conjoint, quant or qual concept testing, in-depth interviews, surveys, etc.) helps me deeply understand user needs, and it’s improved my product judgment dramatically.
What can a more traditional CRO program learn from the type of experimentation your team does at Nationwide?
A/B testing can teach digital professionals the discipline of designing, launching, and measuring an experiment, but it should not be the only tool in the experimentation toolkit. If your role is to test and learn, you need to use research methods that can explain both the what and the why. The research methods vary across the product lifecycle, but the critical thinking is the same, so if you already know how to A/B test, you will be more effective when using other research techniques.
What advice would you give to a team that might be focused more on optimization, but has an appetite for incorporating more exploratory-type concept testing into their mix? Where should they begin?
The most effective way to test concepts is to hire an experienced UX researcher who understands this space, uses diverse research techniques, and applies rigor when designing research and synthesizing results. If you want to learn more on your own, Strategyzer and Just Enough Research by Erika Hall are great resources. As always, don’t underestimate the power of peer review when designing any test.
How do you determine the best research method for a given hypothesis or question you need answered?
I typically use qual research methods (e.g., in-depth interviews, video diaries, open-ended questions) to first identify and scope user needs. Then I use quant methods (e.g., surveys with forced rankings, conjoint studies) to size and validate those needs. I blend those techniques again when testing value propositions and feature sets, starting with small-scale concept testing and following with in-market testing. I continue to blend qual and quant techniques as I work my way up the Product-Market Fit Pyramid, so I consistently understand what works, what doesn’t, and why.
When it comes to experimentation as a practice, marketers have historically put so much more of an emphasis on optimization. When you think about the Product-Market Fit Pyramid, do you think we’re making good strides in other areas?
Yes, I love the Product-Market Fit Pyramid. I use that framework extensively to help teams and leaders crystallize which part of the product strategy they want to test and why. Optimization efforts assume your product strategy is solid, but it’s essential to question and validate the fundamentals. You can’t optimize your way to greatness if you haven’t clearly identified a target customer and the needs your product is solving.
Julia Barham is a customer-focused Product Leader with nearly 8 years of experience designing and scaling high-impact products and experimentation programs for brands like Nationwide, Capital One, and The Motley Fool.
Want to learn more about how leaders like Julia are creating growth through experimentation? Check out Widerfunnel’s latest experimentation study and sign up to be notified when it’s ready.