A review of the consultation flow, results experience, and post-prescription onboarding
Claro Skin's consultation flow is well-designed at the surface level — the quiz is clean, the inputs feel manageable, and the transition to a results page is fast. The breakdown happens at the moment that matters most: when users receive their formula and need to understand why it's right for them.
The results page delivers a prescription without delivering comprehension. Users receive ingredient names without plain-language explanations, a treatment plan without a timeline, and a product without the "why this, for you" story that would make them confident enough to commit. The education resources that could bridge this gap exist — they're simply buried three clicks away from the moment a user needs them most.
The consequence is measurable: App Store reviews and Reddit signal a pattern of cancellations within the first 4–8 weeks, consistently citing confusion about whether the product is working or appropriate. This is a comprehension problem, not a formulation problem.
This Legibility Audit uses three complementary methods to surface comprehension gaps — no participant recruitment required, results in two weeks.
Each finding includes the observation, supporting signal, and a specific recommendation. Findings are ordered by severity.
Finding 01: A formula without a "why"

After completing the 12-question consultation, users land on a results page showing three product names and four ingredients. There is no sentence — anywhere on this page — that says "we recommended this because [your specific concern]." The formula appears authoritative but unexplained, like a prescription pad without a diagnosis conversation.
Users who don't already know what "tretinoin" or "azelaic acid" does have no path to understanding why these were selected for them specifically, versus any other user who completed the quiz.
"I got my results but I have no idea why they gave me this specific combination. It just feels random." — 2-star review, repeated 14 times in varying forms across 90-day review window.
Add a single "Why this formula for you" section to the results page — 2–3 sentences that explicitly connects the user's quiz answers to their specific ingredients. This doesn't require redesign; it's a copy and logic problem.
Example: "Based on your concerns about acne scarring and sensitivity, your formula includes azelaic acid (which fades post-acne marks without irritating sensitive skin) rather than stronger actives like tretinoin."
Finding 02: No timeline for expected results

The results page, the confirmation email, and the product packaging all omit a timeline for expected improvement. This is a known problem in prescription skincare: tretinoin and azelaic acid typically require 8–12 weeks to show meaningful results, with a potential "purging" phase in weeks 3–6.
Without this context, users who see no improvement at week 4 — or who experience purging — have no framework to understand what's happening. Reddit threads show users drawing the rational conclusion: the product isn't working.
"Tried Claro for 6 weeks, my skin actually got worse at first. Nobody told me this would happen. Cancelled." — cross-posted in 3 threads, 47 upvotes.
Add a "What to expect" timeline — a simple visual or short-form explanation that covers: weeks 1–3 (adjustment), weeks 4–6 (potential purging, this is normal), weeks 8–12 (visible improvement begins). This should appear on the results page and in the onboarding email sequence.
Setting accurate expectations is the single highest-leverage retention lever for prescription skincare. Users who understand the purging phase don't cancel — users who don't, do.
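The timeline is small enough to live as one piece of structured content reused across the results page and the email sequence. A sketch, where the week ranges follow the guidance above but the phase copy is placeholder text, not Claro's:

```python
# Illustrative "What to expect" timeline. Week ranges mirror the 8-12 week
# guidance in this report; the phase copy is placeholder text.

TIMELINE = [
    ((1, 3), "Adjustment", "Your skin is getting used to the new actives."),
    ((4, 6), "Possible purging", "Breakouts may briefly worsen; this is normal."),
    ((8, 12), "Visible improvement", "Changes in texture and tone become noticeable."),
]

def expectation_for_week(week: int) -> str:
    """Return the expectation-setting message for a given week of use."""
    for (start, end), phase, copy in TIMELINE:
        if start <= week <= end:
            return f"Weeks {start}-{end} ({phase}): {copy}"
    return f"Week {week}: between phases; keep going, improvement builds slowly."
```

The same structure can render as a visual on the results page and as plain sentences in the week-4 and week-6 check-in emails.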
Finding 03: Clinical terminology without definitions

An audit of the results page and first onboarding email finds 11 clinical or technical terms used without definition. The quiz itself uses "seborrheic dermatitis," "post-inflammatory hyperpigmentation," and "comedogenic" — terms that require dermatological literacy to answer accurately.
This creates two failure modes: users who don't know these terms answer inaccurately (reducing formula precision), and users who receive a results page full of clinical language feel they're receiving a medical document, not a personalized recommendation.
Audit all clinical terminology in the quiz and results flow. For each term: either replace with plain language, add an inline tooltip/definition, or add a parenthetical — "retinoid (a form of Vitamin A that speeds up skin cell turnover)".
The goal isn't to dumb down the science — it's to make users feel like the formula understands their skin, not like they're reading a chart note. Comprehension builds trust; clinical opacity erodes it.
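One low-effort way to enforce the parenthetical pattern is a glossary pass over outgoing copy. A sketch, with example definitions that are illustrations, not Claro's approved medical copy:

```python
import re

# Hypothetical glossary pass: append a plain-language parenthetical after the
# first occurrence of each clinical term. Definitions here are examples only.

GLOSSARY = {
    "retinoid": "a form of Vitamin A that speeds up skin cell turnover",
    "comedogenic": "likely to clog pores",
    "hyperpigmentation": "dark marks left behind after breakouts",
}

def add_definitions(text: str) -> str:
    """Insert a definition in parentheses after each term's first use."""
    for term, definition in GLOSSARY.items():
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        text = pattern.sub(
            lambda m, d=definition: f"{m.group(0)} ({d})", text, count=1
        )
    return text
```

A pass like this also doubles as the terminology audit itself: any term in the glossary that never fires in the flow's copy is one the team has already translated.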
Questions like "How would you describe your skin type?" and "Do you currently use any retinoids?" require a definitive answer. A significant portion of Claro's target users — people new to prescription skincare — don't know their skin type in clinical terms and haven't used retinoids before.
Forced-choice questions on topics of genuine uncertainty cause users to guess, then doubt the recommendation that follows. "I wasn't sure how to answer some questions, so I wonder if the formula is actually right for me" is a trust collapse before the product arrives.
Add "I'm not sure" / "I don't know" as a valid quiz response where relevant. When selected, the product should acknowledge this and explain how it handles uncertainty — e.g., "No problem — we'll start with gentler actives and adjust based on how your skin responds."
This turns an anxiety point into a trust moment: the user learns the system is designed for people who don't have all the answers, not just dermatology enthusiasts.
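The uncertainty handling can be a simple branch in the formula logic. The ingredient names and strengths below are invented for illustration and are not medical guidance:

```python
# Hypothetical branch: uncertain answers steer toward a gentler starting
# formula. Ingredients and strengths are placeholders, not medical guidance.

def starting_formula(knows_skin_type: bool, has_used_retinoids: bool) -> dict:
    """Pick a conservative starting point when the user signals uncertainty."""
    if not (knows_skin_type and has_used_retinoids):
        return {
            "active": "azelaic acid 10%",
            "message": ("No problem - we'll start with gentler actives and "
                        "adjust based on how your skin responds."),
        }
    return {"active": "tretinoin 0.025%", "message": "Standard starting strength."}
```

The message field is the trust moment: it travels with the recommendation so the results page can show uncertain users that their "I'm not sure" was heard, not ignored.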
Finding 05: Useful education content, buried

The ingredient glossary at claro.com/ingredients is genuinely useful — it explains each active, why it's used, and what to expect. A "How your formula is made" guide is also available under /learn. Neither is linked or referenced from the results page, confirmation email, or product packaging.
The educational content was likely built to drive SEO or for curious users who go looking. It's not surfaced at the moment of need — when a user receives a formula they don't yet understand.
Surface these resources contextually. On the results page, link each ingredient name to its glossary entry. In the first onboarding email (after product ships), include a "Before your formula arrives" section with a link to the what-to-expect guide.
This is a positioning win as much as a UX win: it signals that Claro is a brand that wants users to understand their skin, not just sell them a product.
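Linking each ingredient name to its glossary entry can be mechanical. The sketch below assumes the glossary page uses per-ingredient anchors, which is a guess about the site's URL structure:

```python
# Hypothetical link builder; the "#slug" anchor format is an assumption
# about how claro.com/ingredients is structured.

def glossary_link(ingredient: str) -> str:
    """Turn an ingredient name into a link to its glossary entry."""
    slug = ingredient.strip().lower().replace(" ", "-")
    return f"https://claro.com/ingredients#{slug}"
```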
Three of the five findings are low-effort, high-impact copy and content changes. They don't require engineering resources or design overhaul — they require someone to write better explanations.
They can be addressed in a single focused sprint. Here's a suggested path forward:
Week 1: Share this report with the team. Assign Findings 01 and 02 to a copywriter or PM — they're writing tasks, not design tasks. Finding 05 is a link-placement decision that can be made in a day.
Week 2–3: Run a quick usability check on the updated results page with 3–5 users to validate that the new "why this formula" explanation lands as intended before shipping more broadly.
Sprint 2: Tackle the terminology audit and quiz uncertainty handling together — they benefit from the same content strategy thinking.
I'm available to support implementation review or to run a follow-up validation study on the updated flow. A 5-person usability test on the revised results experience would directly measure whether these changes close the comprehension gap.
Legible Research is a specialist UX research practice for consumer health and beauty products. Questions about this report or next steps: hello@legibleresearch.com