🏢 Company overview
Attest is a startup that helps researchers and marketers have confidence in every decision with quick, insightful consumer research. The platform brings together a flexible survey builder, a large panel audience, data quality controls and results analytics tools into a single experience, allowing researchers to get quick answers to their pressing questions.
🤔 Problem statement
Customers could technically run a single-market brand tracker (a way of measuring a brand's performance in a market) on Attest; however, it was a series of single surveys that they then had to analyse in Excel. Their Job To Be Done is to track their key brand metrics, and their specific problem with our current solution was that they could not see how their brand awareness metrics changed over time.
This was a key problem space for Attest, with a sizeable amount of Net ARR and churn risk attached to it.
👥 Users
Marketing managers at growing startups. They have some experience with research and data tools, but not much. Their key Job To Be Done is understanding whether their marketing strategy is working, via brand awareness and other metrics.
🌟 Goal
Create a product for single-market brand trackers that will allow marketers to easily see the difference in their key brand metrics over time, making them more likely to use Attest for their brand tracker going forward (and renew their subscription).
OKR:
Objective: Grow the number of customers running end-to-end legitimate multi-survey research through the new product
Key Result: Increase the number of organisations using* Trackers from 7 to 40 by the end of Q2
⚙️ Process
🕵️ Discovery
We began the project in late 2020 with a cross-functional workshop designed to challenge the paradigms of how Attest worked. At the time our survey model looked something like this:

This worked effectively for one-time research objectives, but when our users wanted to measure changes over time, it required the process to be repeated and then the results merged:

The smallest and easiest change we could make was to give users tools to make this comparison easier. We had run a few experiments in this space, which had met with limited success: because the surveys didn't 'know' about each other, the tools were complicated and limiting to use, and the data wasn't always comparable or compared correctly.


So we spent a week reworking the system model. If there were no rules, what would we do?
We began by mapping all the objects in the system, their relationships and the data they contained. With this info in front of us we rearranged the objects, debated the relationships and built on each other's ideas until we settled on a new model:

There would be one survey that could be sent off to collect results multiple times (or have multiple 'dips') that would all feed back into a single set of results. We tested this model against multiple use cases beyond the scope of what we were looking to build to ensure it would scale (how would this work for multiple markets? What about different languages?) and, once we felt confident it would work, set about creating concept experiences and evaluating them.
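To make the shape of that model concrete, here is a minimal sketch in TypeScript. All names are illustrative assumptions, not Attest's actual schema: one survey definition owns the questions, and every dip appends its results to the same tracker.

```typescript
// One survey definition, sent multiple times; every "dip" feeds its results
// back into the same tracker (names here are illustrative, not Attest's schema).

interface Question {
  id: string;
  text: string;
}

interface Dip {
  id: string;
  sentAt: Date;
  // questionId -> answers collected in this dip
  responses: Map<string, string[]>;
}

interface Tracker {
  // single source of truth for the questionnaire
  survey: { questions: Question[] };
  // every send accumulates here, so waves are inherently comparable
  dips: Dip[];
}

// Comparing a metric over time becomes a walk over the dips of one object,
// rather than a manual merge of unrelated surveys in a spreadsheet.
function responsesOverTime(tracker: Tracker, questionId: string): number[] {
  return tracker.dips.map(dip => (dip.responses.get(questionId) ?? []).length);
}
```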
🧪 Research
With our model agreed, we set out to evaluate and build a solution. We began by mapping the entire experience in a single story map, which we used to divide the stories between those we felt were must-haves for the MVP experience and those that were less essential.

With our rough scope identified we collected our assumptions, questions and hypotheses for the project. These included:
- We assume that by importing old surveys, we are making the solution immediately valuable to existing brand tracking users.
- We assume that users will not need to make changes to their survey after sending the first dip.
- We assume that for single market trackers, users won't need to filter out waves on the Trends page.
- We assume that a default pot is sufficient for use.
We prioritised these and created a series of research plans to gather insight and build confidence in our concepts. At the time our customer contact was limited, so we structured each session to first understand customer expectations in a given part of the experience and then ask participants to complete a series of tasks in a prototype. These would often be early Figma click-through prototypes, but at later stages we began introducing early versions of the code to better understand how customers interacted with real data.



We learned a great deal about the usability of our solution, but also three key things about its value:
1. Editing between dips/waves/sends was crucial. Customers would often adjust their survey over the first few sends to iron out any inconsistent or less useful data.
2. Not all questions suited a line graph. Some customers sent trackers yearly, leading to smaller data samples, and some simply preferred a bar graph view.
3. Customers needed the full range of questions visualised for the solution to be useful.
💡 Solution
Trends
Trends would be the core of our solution. Here customers would be able to view the results of each send of a survey on a visual graph, switching between line and bar graphs depending on the question and user. Each graph's data could be filtered by specific demographics, and responses could be shown or hidden depending on what users wanted to compare.
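A rough sketch of the kind of transformation behind Trends, under assumed names and data shapes rather than the real implementation: each wave's respondents are narrowed by demographic filters, whole waves can be hidden, and what remains is counted for the graph.

```typescript
// Hypothetical sketch of the filtering Trends performs: narrow each wave's
// respondents with demographic filters, hide whole waves, count what's left.

interface Respondent {
  age: number;
  region: string;
}

interface WaveResult {
  waveId: string;
  respondents: Respondent[];
}

type DemographicFilter = (r: Respondent) => boolean;

function seriesForGraph(
  waves: WaveResult[],
  filters: DemographicFilter[],
  hiddenWaves: Set<string>,
): { waveId: string; count: number }[] {
  return waves
    .filter(w => !hiddenWaves.has(w.waveId)) // show/hide whole sends
    .map(w => ({
      waveId: w.waveId,
      count: w.respondents.filter(r => filters.every(f => f(r))).length,
    }));
}

// e.g. only under-35s in the UK, with the second wave hidden:
const waves: WaveResult[] = [
  { waveId: "wave-1", respondents: [{ age: 28, region: "UK" }, { age: 52, region: "UK" }] },
  { waveId: "wave-2", respondents: [{ age: 31, region: "UK" }] },
];
const points = seriesForGraph(waves, [r => r.age < 35, r => r.region === "UK"], new Set(["wave-2"]));
// points -> [{ waveId: "wave-1", count: 1 }]
```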



Results
Customers could view the results of individual surveys in the traditional results view. They could switch between each send and export the data to different formats.

Edits
Our initial experience only allowed users to send an exact copy of the original survey to the same audience. We had learned from our research that in more than a few cases this was too limiting and didn't allow for refinement of a tracker, so we spent some time building an editing experience that allowed customers to add new questions, delete questions and make limited changes to existing questions' content.
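The constraint can be summed up as: only allow edits that keep earlier dips comparable. A small sketch of that rule, where the disallowed edit kinds are our assumptions about what would break comparability rather than confirmed product behaviour:

```typescript
// A sketch of the editing rule between dips. The rejected kinds are assumed
// examples of changes that would break comparability, not confirmed behaviour.

type QuestionEdit =
  | { kind: "add" }             // new question starts appearing in later dips
  | { kind: "delete" }          // question stops appearing in later dips
  | { kind: "reword" }          // limited change to an existing question's content
  | { kind: "changeType" }      // assumed disallowed: e.g. single choice -> grid
  | { kind: "reorderOptions" }; // assumed disallowed: answers no longer line up

// Only edits that keep earlier dips comparable are accepted.
function allowedBetweenDips(edit: QuestionEdit): boolean {
  return edit.kind === "add" || edit.kind === "delete" || edit.kind === "reword";
}
```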

✨ Outcomes and lessons
👍 Validation
For our initial beta release we invited a core group of 7 customers. Since full release, 45 Trackers have been created in total, 5 more than we originally targeted.
Customers responded very positively to the new feature:
Big shout out of course to the whole Trends squad, as this functionality has made Jasper's brand tracker even more business critical than it was and has added a great amount of value.
I just showed some of the new releases to a key contact at Sainsburys. Trackers was really well received as they had shared a frustration with the challenges around comparing pre/post campaign surveys and mentioned that this could help unlock them during more trend work with us.
🤓 What I learned
The importance of breaking down big problems into vertical slices of the experience. We did this well during the project, but we absolutely could have done it better.