Ben Shelford

Question display logic (2021-2022)

Description: Solve a significant pain point for our customers by creating a new system to orchestrate the circumstances under which certain questions are shown to respondents.

Goal: Better serve a crucial use case, prevent churn, increase adoption and close a competitive gap.

Years: 2021-2022

Role: Product designer

(Responsible for leading the project end to end: digging into each facet of the experience, fully understanding customer needs, and defining the scope of releases with engineering and Product)


Intro

Realising the problem

Researchers often need to configure surveys with multiple questions to identify weaknesses in their product, brand or proposition. Attest's initial feature was “Routes,” an effort to make it easy to create diverging paths in surveys.

This approach was very visual, putting approachability at its core and hiding the underlying logic of the survey flow. Customers could craft each question, link them up dependent on the answer selected and launch the survey.

Many customers appreciated this approach but, with time, we found that for many others it made things more complicated rather than easier: too much of the logic was hidden behind the simple visual linking.

Through a series of customer interviews, we found that many customers begin their survey creation process by planning out their questions, order, and conditions in a text document.

We understood that customers were growing increasingly frustrated with the restrictive, linear route structure of traditional surveys. The inability to ask a question early and follow up on it later introduced bias, and forced users to spend hours crafting a survey that was both effective and structurally sound. This often resulted in ‘broken’ surveys, which caused adoption issues and higher churn rates.

Learning and understanding

In order to get a broad understanding of the problem, we decided to take in lots of diverse inputs to guide our thinking:

  • We directly interviewed a small number of customers to build a better picture of their needs when creating paths in their survey.

  • We looked through past feedback from our existing customers, which helped us to better understand their needs, and the scale of those needs.

  • We had multiple conversations with our internal Customer Research Team (internal research experts who support customers with their research objectives).

  • We played with dozens of logic builders such as Intercom Series, Apple Shortcuts, Mixpanel Cohorts and many more, trying to understand what makes all of those tools great.

  • We signed up for dozens of competing products to get a glimpse of what works in their solution and what doesn't.

Through this process, we discovered a spectrum of needs that the current Routes system could not meet:

  • Show a series of questions based on multiple answers to a multiple-choice question

  • Display questions later in the survey based on earlier answers

  • Logic based on answers that the respondent did not select

  • Targeting behaviour based on multiple answers across multiple questions
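To make these needs concrete, here is a minimal sketch of the kind of rule model they imply. All names and the data shape are hypothetical illustrations, not Attest's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Condition:
    """One check against an earlier question's answers (hypothetical model)."""
    question_id: str       # the earlier question this rule depends on
    answers: set           # answer options to match against
    selected: bool = True  # False models "answer was NOT selected" logic

    def matches(self, responses: dict) -> bool:
        chosen = set(responses.get(self.question_id, []))
        hit = bool(chosen & self.answers)
        return hit if self.selected else not hit

@dataclass
class DisplayRule:
    """Decides whether a later question is shown to this respondent."""
    conditions: list
    mode: str = "all"  # "all" = every condition must hold, "any" = at least one

    def should_show(self, responses: dict) -> bool:
        results = [c.matches(responses) for c in self.conditions]
        return all(results) if self.mode == "all" else any(results)

# Example: show a question only if the respondent picked "Instagram" in Q1
# but did NOT pick "Daily" in Q2 — logic spanning multiple questions,
# including an answer that was not selected.
rule = DisplayRule(
    conditions=[
        Condition("q1", {"Instagram"}),
        Condition("q2", {"Daily"}, selected=False),
    ],
    mode="all",
)

print(rule.should_show({"q1": ["Instagram"], "q2": ["Weekly"]}))  # True
print(rule.should_show({"q1": ["TikTok"], "q2": ["Weekly"]}))     # False
```

Each bullet above maps onto this shape: multiple answers per condition, conditions on earlier questions, negated (`selected=False`) conditions, and multi-question targeting via the `mode` combinator.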

Exploring concepts

In parallel, I started to clarify the vision behind this feature: what the opportunity was, roughly how it could work, and how it would fit into the overall Attest system.

As I worked on ideas, I made some assumptions that needed testing before we moved forward. To get feedback from a group of customers, I whipped up two prototypes in Figma. These gave us an idea of how customers wanted more complex logic, but we realised a more ‘realistic’ prototype was needed to elicit genuine feedback. So, to really understand user needs, we set up a plan to create something more tangible.

I partnered with a frontend engineer to create a throwaway prototype for our Display logic concept. Once we had a proof of concept ready, we tested it with a handful of customers to get their feedback.

As a result of our efforts, we discovered additional use cases, needs, and confusion that had been previously overlooked.

Finalising designs

Finally, having done all of the above, I moved on to exploring the gnarly system, UX, interaction and visual design problems.

Initial release

We made our first release of Display logic in late 2021. Due to the technical complexity of this work, we kept the first release scope as small as possible. This meant initially cutting some customer needs and staggering those as follow-up releases. To help us understand which needs to work on next, I set up a Buy a Feature research exercise. In this exercise, each feature is given a description and a price reflecting its development complexity. Participants are given 'money' to spend on the features they want, but only enough to buy a little over half of them. We would monitor what they bought first, what they left behind, and whether they chose many 'cheap' features or a few 'expensive' ones.
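The mechanics of the exercise can be sketched in a few lines. The feature names and prices below are illustrative placeholders, not the real ones used:

```python
# Hypothetical Buy a Feature setup: prices reflect development complexity.
features = {
    "Logic on unselected answers": 3,
    "Multi-question conditions":   5,
    "Bulk edit rules":             2,
    "Copy logic between surveys":  4,
    "Randomised display":          2,
}

# Participants get a budget worth a little over half the total price,
# forcing a trade-off between many cheap features and a few expensive ones.
total = sum(features.values())   # 16 in this illustrative setup
budget = total // 2 + 1          # 9

def spend(picks, budget):
    """Return the features a participant can afford, in the order chosen."""
    bought = []
    for name in picks:
        price = features[name]
        if price <= budget:
            bought.append(name)
            budget -= price
    return bought

# One participant prioritises the expensive multi-question conditions,
# and runs out of money before their third choice.
print(spend(["Multi-question conditions", "Copy logic between surveys",
             "Bulk edit rules"], budget))
# ['Multi-question conditions', 'Copy logic between surveys']
```

The forced trade-off is the point of the method: which features participants buy first, and which they abandon when the budget runs out, reveals their priorities more honestly than an unconstrained ranking.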

I ran this exercise with a selection of internal research experts and customers to build a picture of what were the most important iterations. With this information, I set up a story map detailing the order we would release and the requirements for each.

This approach was useful in keeping us aligned as a team, surfacing the dependencies and leading to a rapid series of releases building on our initial work.

Measuring success

This addition to our survey creation system unlocked a range of customer needs and exceeded our adoption target of 30% of customers in the following quarter. It has also unlocked new possibilities for the future of the product. Most importantly, the early feedback has been overwhelmingly positive.
