I design user-centred products where strategy, UX, and real-world constraints meet.
I turn ambiguity into product decisions. I connect user problems to business outcomes and design systems that reduce friction, improve clarity, and drive adoption.
Projects showcasing how I combine product thinking with design to solve real problems: navigating complex systems, building trust in AI, and creating experiences where technology amplifies human judgment.
Designing clarity into a fast-moving enterprise configuration system by eliminating silent failures.

Designing how AI and humans share decision-making in high-stakes workflows. An AI UX case study exploring confidence, escalation, and trust in customer support systems.

Designing an enterprise AI knowledge assistant that makes AI reasoning transparent, verifiable, and controllable. The work tackles the 'black box' problem directly: surfacing confidence indicators, source citations, and chain-of-thought reasoning. An understanding of RAG architecture informed UX decisions around progressive loading, transparency features, and trust-building patterns.

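To make that concrete: a transparency-first answer implies a richer payload than plain text. Here is a minimal sketch of the shape involved, assuming a typical RAG response; all names are hypothetical, not the actual AskAtlas API.

```typescript
// Hypothetical shape of a transparency-first RAG answer.
interface SourceCitation {
  title: string;
  url: string;
  snippet: string; // the retrieved passage backing the claim
}

interface AssistantAnswer {
  text: string;
  confidence: number;          // 0..1, rendered as a visible indicator
  citations: SourceCitation[]; // lets users verify instead of trusting blindly
  reasoningSteps?: string[];   // optional chain-of-thought summary
}
```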
How I think about designing AI products: making the invisible transparent, keeping humans in control, and building interfaces that earn trust through clarity.
Why do people actually need this? What decision are they trying to make faster, or what are they currently doing manually? I skip the AI and focus on the human problem first.
Identified 3.2 hrs/day wasted searching docs; that figure became the core metric for AskAtlas.
AI is a black box by default. I design interfaces that show confidence levels, explain reasoning, and cite sources. Users need to understand why they should trust an answer.
Confidence indicators in prototypes increased user trust scores by 42% in testing.
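A minimal sketch of that pattern, assuming a 0-to-1 confidence score from the model; the thresholds here are illustrative, not the values we tested.

```typescript
// Map a raw model confidence score to the UI treatment of an answer.
type ConfidenceBadge = 'high' | 'medium' | 'low';

function badgeFor(confidence: number): ConfidenceBadge {
  if (confidence >= 0.85) return 'high';  // plain answer, sources attached
  if (confidence >= 0.6) return 'medium'; // answer shown with a visible caveat
  return 'low';                           // warning state: verify before acting
}
```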
AI should amplify human judgment, not replace it. I design for control: users can override, adjust settings, and understand what AI is doing behind the scenes.
Override controls I designed reduced AI abandonment from 35% to 12% in early tests.
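In code terms, the idea is that an AI suggestion is only a default. A rough sketch, with hypothetical names:

```typescript
// Treat the AI's proposal as a default the user can always replace.
interface Suggestion {
  id: string;
  proposedAction: string;
}

interface Resolution {
  suggestionId: string;
  action: string;
  overriddenByUser: boolean; // logged so override rates stay measurable
}

function resolve(suggestion: Suggestion, userAction?: string): Resolution {
  return {
    suggestionId: suggestion.id,
    action: userAction ?? suggestion.proposedAction, // user input always wins
    overriddenByUser: userAction !== undefined,
  };
}
```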
AI fails. Sometimes it hallucinates, sometimes it's just confident and wrong. I design graceful fallbacks and make it easy for users to say 'I don't trust this answer.'
Low-confidence warnings caught 87% of potential AI errors before users acted on them.
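A sketch of how a low-confidence gate can work, assuming the same 0-to-1 score; the threshold and fallback options are illustrative.

```typescript
// Below the threshold, withhold a direct answer and offer safer paths.
const TRUST_THRESHOLD = 0.6; // illustrative cutoff, not the tested value

function renderPlan(confidence: number) {
  if (confidence < TRUST_THRESHOLD) {
    return {
      showAnswer: false,
      showWarning: true,
      fallbacks: ['show matched sources only', 'escalate to a human'],
    };
  }
  return { showAnswer: true, showWarning: false, fallbacks: [] as string[] };
}
```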
Users want fast answers, but accuracy matters in high-stakes decisions. I balance responsiveness with reliability, showing progress so waiting doesn't feel broken.
Progressive loading reduced perceived wait time by 63%, even when actual speed stayed the same.
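A minimal sketch of the staging, with hypothetical labels; the point is that each backend phase maps to something honest on screen.

```typescript
// Surface each retrieval/generation phase so the wait reads as progress.
type Stage = 'searching' | 'reading' | 'drafting' | 'done';

const STAGE_LABELS: Record<Stage, string> = {
  searching: 'Searching documentation…',
  reading: 'Reading matched sources…',
  drafting: 'Drafting answer…',
  done: '',
};

function stageLabel(stage: Stage): string {
  return STAGE_LABELS[stage]; // drives the visible progress copy
}
```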
How confident are users in AI answers? Do they verify sources? Do they use it repeatedly? I track metrics that reveal whether AI is building trust or just replacing one frustration with another.
Tracked source click-through rates to validate that users were actually verifying AI outputs.
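A sketch of the instrumentation behind that metric, where `track` stands in for whatever analytics client is in use; all event names are hypothetical.

```typescript
// Stand-in for the real analytics client.
function track(event: string, props: Record<string, unknown>): void {
  console.log(event, props);
}

// Denominator: citations were shown alongside an answer.
function onCitationsShown(answerId: string, count: number): void {
  track('citations_shown', { answerId, count });
}

// Numerator: the user actually opened a source to verify.
function onSourceOpened(answerId: string, url: string): void {
  track('source_opened', { answerId, url });
}
```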