Background

Appari is a gen AI try-on app that helps shoppers visualise clothing on themselves before buying. I led the end-to-end design of the MVP from user research and strategy through to a live product with 250+ early users and two retail partners signed for pilot testing.

The goal was to reduce purchase uncertainty, lower return rates, and give users confidence in their online fashion choices through a simple, mobile-first experience.

Role
UX Visionary

Link
appari.ai

Medium
Mobile App & Website

Tools
Figma, VS Code, ChatGPT, Claude, Vue.js, Tailwind

What's being solved

Shopping for clothes online is convenient until it isn't. Appari set out to solve common, frustrating problems shoppers face:

Issues

  • Uncertainty about fit: Users can't tell if clothes will suit them
  • Inconsistent sizing: A "medium" in one store might be a "small" in another
  • Lack of visualisation: Product images show professional models, not real people
  • High return rates: Users buy multiple sizes and return the rest, wasting time and money
  • Decision fatigue: Without previews, shoppers abandon carts or avoid trying new styles
  • Low confidence: Some users avoid online fashion entirely due to uncertainty

This leads to customer frustration, lower purchase confidence, and costly return rates for retailers.

The challenge: How might we help users see how clothing will look on them before they buy - simply, quickly, and visually?

An AI fitting room

Goals

The goal was to build an MVP that would allow users to upload a selfie and try on clothing virtually using AI. It needed to be intuitive, frictionless, and visually compelling — without requiring complex onboarding or body scanning.

Key objectives:
  • Create a personal AI model for each user
  • Let users try on clothes from any online store
  • Help users make faster, more confident purchasing decisions
  • Validate interest from both consumers and retail partners
  • Ensure the experience is usable for everyday consumers
Success metrics:
  • User sign-ups and beta engagement
  • Time to first try-on completion
  • Referral engagement
  • Pilot interest from fashion brands
  • User satisfaction and feedback validation

Final Design and Outcomes

  • 250+ email sign-ups within three weeks via the landing page
  • Launched a full working MVP on mobile (React + Tailwind + Fashn API)
  • 87 beta users
  • Referral programme added with tiered rewards
  • 60% try-on completion rate for first-time users (within one minute)
  • Two retail brands signed up for a B2B pilot after seeing the demo

AI Model

  • Upload a full-body selfie
  • Camera/gallery integration (a capture sketch follows this list)
  • A gallery of models to test with
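
The case study doesn't show the capture code, but in a browser-based app the camera/gallery step can come down to a single file input. A minimal sketch under that assumption; the function name and behaviour are illustrative, not taken from Appari's codebase.

```ts
// Minimal full-body selfie picker for a browser app. Illustrative only:
// the real Appari implementation is not shown in the case study.
export function pickFullBodyPhoto(): Promise<File | null> {
  return new Promise((resolve) => {
    const input = document.createElement("input");
    input.type = "file";
    input.accept = "image/*";
    // The `capture` hint asks mobile browsers to open the camera directly;
    // omitting it lets the user choose between camera and gallery.
    input.setAttribute("capture", "user");
    // If the user cancels, onchange never fires and this promise simply
    // stays pending; a production version would need a cancel path.
    input.onchange = () => resolve(input.files?.[0] ?? null);
    input.click();
  });
}
```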

Try on Clothes

  • Upload a clothing screenshot or paste a product link
  • Choose from demo outfits
  • Gen AI remodels the clothes on the user's body (a request sketch follows this list)
  • User views the result and makes a decision
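
The outcomes above name the Fashn API as the generation backend, but the integration itself isn't documented in this case study. The sketch below only shows the general shape of a try-on request: a model photo, a garment photo, and a garment category sent to a generation endpoint. The endpoint URL, field names, category values, and response shape are assumptions for illustration, not the documented Fashn contract.

```ts
// Hypothetical try-on request. Endpoint, field names and response shape
// are illustrative assumptions, not the documented Fashn API contract.
type GarmentCategory = "tops" | "bottoms" | "one-piece"; // assumed values

interface TryOnResult {
  imageUrl: string; // generated image of the garment on the user's model
}

export async function generateTryOn(
  modelPhoto: Blob,
  garmentPhoto: Blob,
  category: GarmentCategory,
  apiKey: string
): Promise<TryOnResult> {
  const body = new FormData();
  body.append("model_image", modelPhoto);
  body.append("garment_image", garmentPhoto);
  body.append("category", category);

  const res = await fetch("https://api.example.com/v1/try-on", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body,
  });
  if (!res.ok) throw new Error(`Try-on request failed: ${res.status}`);
  return (await res.json()) as TryOnResult;
}
```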

In App Growth

  • Share outfit with friends for feedback
  • Refer friends to earn free credits
  • View your outfit history and saved looks
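
Sharing an outfit for feedback maps naturally onto the Web Share API on mobile browsers. A minimal sketch, assuming the generated result is available as an image Blob; the clipboard fallback is an assumption rather than Appari's actual behaviour.

```ts
// Share a generated outfit image via the Web Share API, with a simple
// clipboard fallback. Assumes the result image is available as a Blob.
export async function shareOutfit(image: Blob, resultUrl: string): Promise<void> {
  const file = new File([image], "appari-outfit.png", { type: "image/png" });
  if (navigator.canShare?.({ files: [file] })) {
    await navigator.share({
      files: [file],
      title: "What do you think of this outfit?",
    });
  } else {
    // Fallback: copy a link to the result so it can be pasted anywhere.
    await navigator.clipboard.writeText(resultUrl);
  }
}
```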

Case Study

User Research

I conducted research with 340+ participants through interviews and surveys, uncovering key insights:

Consumer pain points

"I usually buy two sizes, try them on, and return one. It's wasteful"
"If I could see how it fits before buying, I'd shop online more often"
"I want to try the dress before ordering"

Competitive Analysis

I analysed existing solutions including Zyler, True Fit, and various AR try-on tools already on the market.

Key gaps identified:

  • Most solutions are single-platform only or require complex setup
  • Limited cross-retailer functionality
  • Poor mobile experience
  • No social integration

Opportunity

Design a mobile-first, cross-platform solution that works with any retailer.

UX Strategy

Core user journey:

  1. Import clothing item from any retailer or social media
  2. Upload selfie (or use saved photo)
  3. See clothing modelled on their AI twin
  4. Share with friends or proceed to purchase
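
To make the journey concrete, here is a small, purely illustrative model of the four steps as a TypeScript union; none of these names come from the Appari codebase.

```ts
// Illustrative model of the core journey. The step and field names are
// hypothetical; they simply mirror the four numbered steps above.
type JourneyStep =
  | { step: "import-garment"; source: "retailer" | "social-media" }
  | { step: "upload-selfie"; photo: File | "saved-photo" }
  | { step: "view-result"; resultImageUrl: string }
  | { step: "share-or-purchase"; action: "share" | "purchase" };

// The happy path walks the steps in order:
const order: JourneyStep["step"][] = [
  "import-garment",
  "upload-selfie",
  "view-result",
  "share-or-purchase",
];
```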

Strategic design principles:

  • Universal: Try on clothes from any brand or store
  • Simple & instant: Mobile-first, under 40-second experience
  • Human: See yourself, not models who don't look like you
  • Social: Built-in sharing to get feedback from friends

MVP Prioritisation

I prioritised features for the initial launch based on user needs and technical feasibility:

Must-have (Phase 1):

  • Basic try-on with photo upload
  • Cross-retailer functionality
  • Mobile-optimised interface
  • Social sharing capability
  • Freemium credit system
  • User AI Model

Information Architecture

Primary flow:

Product Design

Initial core feature wireframe concept

1. AI Model/Digital Twin

The first concept aimed to create the most accurate digital twin possible:

a - Scan face & body with camera or upload images
b - Input body measurements for accuracy


Discussion with potential users:
- Too long-winded, too many steps. It needs to be simpler

2. Select Clothes & Try on

a - Option to input a link or a screenshot/image of the item.

I added image/screenshot input because clothes are often captured from influencer posts and photos friends share.

b - Clothing is remodelled on the user's digital twin.

Feedback:
This is simple and straightforward.

Design version 1

For the initial concept, I wanted to see whether people could actually integrate this kind of tool into their shopping flow, since it's a new way of shopping. I simplified the AI model step to a single full-body selfie, which gen AI is able to work with.

Onboarding & AI model

a - Simple onboarding with Google login (a sign-in sketch follows these steps)

b - A single full-body image, with instructions for the best output.

c - Users can take a photo or upload something from their camera roll
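
Step (a) above mentions Google login but not the auth library behind it. A minimal sign-in sketch, assuming Firebase Authentication purely for illustration.

```ts
// Google sign-in sketch. The case study only says "Google login";
// Firebase Authentication is an assumed choice for illustration.
import { getAuth, GoogleAuthProvider, signInWithPopup } from "firebase/auth";

export async function signInWithGoogle() {
  const auth = getAuth(); // assumes initializeApp() was called elsewhere
  const provider = new GoogleAuthProvider();
  const { user } = await signInWithPopup(auth, provider);
  return { uid: user.uid, name: user.displayName, email: user.email };
}
```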


Try on an outfit

a - Removed URL paste for simplicity and added instructions for garments so the gen AI can be more accurate

b - A preview of the outfit is shown so the user can change it if they selected the wrong image

c - The garment type is entered so more accurate results can be achieved

d - Final output screen

Launch & Feedback

Round 1 - Prototype testing (4 users):

  • Tested the Figma prototype with key user flows
  • Identified confusion around photo upload requirements
  • Gathered additional feedback from UX GPT
  • Surfaced issues around the credit system and subscriptions

Round 2 - Beta launch (87 users):

  • Real product testing with live AI generation
  • Strong validation of the core concept
  • Feedback on load times and quality expectations
  • Errors during AI generation
  • Trust issues around uploading a full-body selfie

Key Findings & Iterations

Finding: Users were hesitant to upload personal photos immediately

Solution: Added demo models and outfits to explore first

Finding: AI image generation did not always meet users' quality expectations

Solution: Added 'regenerate' option without charging an additional credit
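
One way to express the free-regenerate rule is to charge a credit only for a new model/garment pair and let repeat generations of the same pair through for free. A hedged sketch; the data shapes and names are hypothetical.

```ts
// Hypothetical credit rule for the "free regenerate" change: a credit is
// spent on the first generation of a model/garment pair, and regenerating
// the same pair costs nothing. Names and storage are illustrative only.
interface Account {
  credits: number;
  generatedPairs: Set<string>; // pairs that have already been paid for
}

export function chargeForGeneration(
  account: Account,
  modelId: string,
  garmentId: string
): boolean {
  const key = `${modelId}:${garmentId}`;
  if (account.generatedPairs.has(key)) return true; // regenerate: no charge
  if (account.credits <= 0) return false;           // out of credits
  account.credits -= 1;
  account.generatedPairs.add(key);
  return true;
}
```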

Finding: On first use, people didn't have clothes to try

Solution: Added a gallery of clothing to test with

Finding: Credits were too expensive

Solution: Reduced pricing and increased number of free credits

App in action

First MVP demo

Other Screens

Saved Clothing

Goal: Users can review their saved clothing when making buying decisions, write notes, and share items with others.

Credits & referral

Goal: Users can manage basic account tasks, view their outfit credits, buy more, and refer friends.
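
The outcomes earlier mention a referral programme with tiered rewards, though the actual tiers aren't documented in this case study. A small sketch with invented thresholds and credit amounts.

```ts
// Tiered referral rewards sketch. The case study mentions tiered rewards
// but not the actual tiers; these thresholds and amounts are invented.
const REFERRAL_TIERS = [
  { minReferrals: 10, creditsPerReferral: 3 },
  { minReferrals: 5, creditsPerReferral: 2 },
  { minReferrals: 0, creditsPerReferral: 1 },
];

export function creditsForNextReferral(referralCount: number): number {
  const tier = REFERRAL_TIERS.find((t) => referralCount >= t.minReferrals);
  return tier?.creditsPerReferral ?? 1;
}
```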

What's Next

Based on user feedback and research, the next design priorities include:

  • Sizing recommendations: Visual feedback on fit (too tight, perfect, loose)
  • Improved personalisation: Body shape adjustments for accuracy
  • Social features: Community lookbooks and style inspiration
  • Smart styling: AI-powered outfit recommendations
  • Enhanced sharing: Integrated social commerce features

Long-term vision: Create an experience where people can visualise themselves in any outfit from any retailer, building confidence and reducing the friction of online fashion shopping.