
unBoxML
Project at MathWorks
Advised by: Heather Gorr, Hans Scharler, Claudia Wey
Conversations around bias in data science models are often more convoluted than the algorithms themselves. Because the topic is so important in the world of data, I created a project that helps MathWorks break down and simplify discussions about bias in data science. The project turns building a data science model into a gamified experience, with the main goal of highlighting what causes bias in artificial intelligence. As the user selects features to answer a question such as “What makes a great software developer at MathWorks?”, two scores appear on a “Moral Compass”: one for accuracy and one for bias. The goal is always to increase accuracy and decrease bias, which forces the user to ask: how do I make that tradeoff?
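To make the tradeoff concrete, here is a minimal sketch of how a feature selection could be scored for accuracy and bias. This is not the project's actual code; the synthetic data, the function name, and the choice of a demographic-parity gap as the bias measure are all illustrative assumptions.

```python
# Illustrative sketch only: a toy "Moral Compass" that scores a feature
# selection on accuracy and bias. All names and data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic candidate data: two job-relevant features, one protected attribute,
# and a proxy feature that leaks the protected attribute.
skill = rng.normal(size=n)
experience = rng.normal(size=n)
group = rng.integers(0, 2, size=n)             # protected attribute
proxy = group + rng.normal(scale=0.3, size=n)  # strongly correlated with group
hired = (skill + 0.5 * experience + 0.8 * group
         + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

features = {"skill": skill, "experience": experience, "proxy": proxy}

def score_selection(selected):
    """Return (accuracy, bias) for a model trained on the selected features."""
    X = np.column_stack([features[name] for name in selected])
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, hired, group, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    pred = model.predict(X_te)
    accuracy = (pred == y_te).mean()
    # Bias here = demographic parity gap: difference in positive rates by group.
    bias = abs(pred[g_te == 1].mean() - pred[g_te == 0].mean())
    return accuracy, bias

print(score_selection(["skill", "experience"]))           # lower accuracy, lower bias
print(score_selection(["skill", "experience", "proxy"]))  # higher accuracy, higher bias
```

In this toy setup, adding the proxy feature raises accuracy but also widens the gap between groups, which is exactly the tension the Moral Compass is meant to surface.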
Role: concept, interaction design, interviews, visual design

Mix of layout and visual iterations
User Flows
Feedback session with fellow designers

User Flows


Homepage

Level 1

Level 3
As users progress through the three levels, the difficulty ramps up sharply, making them aware that they shouldn’t underestimate the underlying layers that cause bias.
Introduction demo
The game starts with a demo that explains how to play.
Adding Features
Users can add features from a list that pops up.
Reordering Features
Users can reorder features according to 4 levels of priority.




