The Product
ClosedLoop is a healthcare platform that gives providers, payers, and value-based care organizations the ability to make accurate, explainable, and actionable predictions of individual-level health risks.
The Problem
Users had no way to check whether a machine learning model was trained on a representative sample across attributes such as race, or whether it was impacted by label bias.
The Goal
Let users assess whether a machine learning model uses a representative sample, is impacted by label bias, or may result in an unfair distribution of resources. After reviewing each metric, users can indicate whether the model has been validated and whether it is biased or unbiased.
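To make these three checks concrete, here is a minimal sketch in Python with pandas of the kinds of metrics behind each question. It is illustrative only, not ClosedLoop's implementation; the column names ("group", "label", "risk_score"), the population shares, and the top-k selection rule are all assumptions for the example.

```python
# Illustrative sketch only -- not ClosedLoop's implementation.
# Assumed columns: "group" (e.g., race), "label" (observed outcome), "risk_score".
import pandas as pd


def representation_gap(df: pd.DataFrame, population_share: dict) -> pd.DataFrame:
    """Representative sample: compare each group's share of the training data
    to its share of the population the model will score."""
    table = pd.DataFrame({
        "sample_share": df["group"].value_counts(normalize=True),
        "population_share": pd.Series(population_share),
    })
    table["gap"] = table["sample_share"] - table["population_share"]
    return table


def label_rates(df: pd.DataFrame) -> pd.Series:
    """Label bias: positive-label rate per group. Large gaps can signal that the
    label proxies something other than true need (e.g., access to care)."""
    return df.groupby("group")["label"].mean()


def selection_rates(df: pd.DataFrame, k: int) -> pd.Series:
    """Resource distribution: fraction of each group that would receive an
    intervention if the top-k risk scores were selected."""
    top_k = df.nlargest(k, "risk_score").index
    return df.assign(selected=df.index.isin(top_k)).groupby("group")["selected"].mean()


# Tiny made-up example of how the checks would be run.
df = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B"],
    "label": [1, 0, 1, 1, 0],
    "risk_score": [0.9, 0.2, 0.8, 0.4, 0.1],
})
print(representation_gap(df, {"A": 0.5, "B": 0.5}))
print(label_rates(df))
print(selection_rates(df, k=2))
```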
My Role
Product designer from conception to delivery.
My Responsibilities
User-journey mapping, paper and digital wireframing, low- and high-fidelity prototyping, usability studies, design iteration, and design QA.
Insights from User Testing
User Understanding
Although the platform uses tooltips to keep visual weight down, users needed clear, helpful instructions for evaluating bias and fairness directly on the page rather than tucked into tooltips.
Visual Weight
A collapsible “Settings” section lets users expand or collapse configuration details, reducing scrolling and balancing the visual weight of the page.
User Confidence
Because users may interpret the results of the bias and fairness validation differently, a section was added where they confirm that they have personally reviewed the validation and are confident in their assessment.
Discoverability
To make a model’s bias and fairness status easy to find, a dedicated column was prominently displayed on the machine learning models page.
Takeaways
- Challenges: Visual aids such as charts and graphs can enhance comprehension of the bias and fairness assessment results, but I also had to limit how much educational material appeared on the page at once. Balancing these two goals was the hardest part, and the feedback I received made it possible in the end.
- What I learned: Because information about bias and fairness is so dense, this project was a balancing act. I had to find ways to display the right amount of copy without overwhelming users, for example by adding a section that users can expand or collapse as needed.
- Teamwork: It was a pleasure to work with project managers, engineers, and our in-house data scientists. I couldn’t have done it without their diverse skill sets and multidisciplinary expertise in these complex topics.
Status
Bias & Fairness was released at the end of 2022, and the release has been a success. Users are learning about the feature and actively leveraging it to improve the fairness of their machine learning models. The insights and feedback gathered during this release phase will guide us in further refining the feature, enhancing its usability, and expanding its capabilities to meet the evolving needs of our users.